Preparing release version 5.1.3

nicoddemus committed Sep 18, 2019
1 parent 892bdd5 commit 1a9f4a5
Showing 18 changed files with 71 additions and 35 deletions.
16 changes: 16 additions & 0 deletions CHANGELOG.rst
@@ -18,6 +18,22 @@ with advance notice in the **Deprecations** section of releases.
.. towncrier release notes start
pytest 5.1.3 (2019-09-18)
=========================

Bug Fixes
---------

- `#5807 <https://github.com/pytest-dev/pytest/issues/5807>`_: Fix pypy3.6 (nightly) on Windows.


- `#5811 <https://github.com/pytest-dev/pytest/issues/5811>`_: Handle ``--fulltrace`` correctly with ``pytest.raises``.


- `#5819 <https://github.com/pytest-dev/pytest/issues/5819>`_: Windows: Fix regression with conftest whose qualified name contains uppercase
characters (introduced by #5792).
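
The ``--fulltrace``/``pytest.raises`` fix (#5811) can be exercised with a
test along these lines (a minimal sketch; the test body is illustrative):

.. code-block:: python

    import pytest

    def test_zero_division():
        # the fix concerns running `pytest --fulltrace` against tests
        # that use pytest.raises; the block itself behaves as usual
        with pytest.raises(ZeroDivisionError):
            1 / 0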


pytest 5.1.2 (2019-08-30)
=========================

1 change: 0 additions & 1 deletion changelog/5807.bugfix.rst

This file was deleted.

1 change: 0 additions & 1 deletion changelog/5811.bugfix.rst

This file was deleted.

2 changes: 0 additions & 2 deletions changelog/5819.bugfix.rst

This file was deleted.

1 change: 1 addition & 0 deletions doc/en/announce/index.rst
@@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2


release-5.1.3
release-5.1.2
release-5.1.1
release-5.1.0
23 changes: 23 additions & 0 deletions doc/en/announce/release-5.1.3.rst
@@ -0,0 +1,23 @@
pytest-5.1.3
=======================================

pytest 5.1.3 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Christian Neumüller
* Daniel Hahler
* Gene Wood
* Hugo


Happy testing,
The pytest Development Team
2 changes: 1 addition & 1 deletion doc/en/assert.rst
@@ -279,7 +279,7 @@ the conftest file:
E vals: 1 != 2
test_foocompare.py:12: AssertionError
-1 failed in 0.02s
+1 failed in 0.12s
.. _assert-details:
.. _`assert introspection`:
2 changes: 1 addition & 1 deletion doc/en/builtin.rst
@@ -160,7 +160,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
in python < 3.6 this is a pathlib2.Path
-no tests ran in 0.00s
+no tests ran in 0.12s
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
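
For instance (a sketch; ``pytest.fixture`` stands in for any pytest object
you want to inspect):

.. code-block:: python

    import pytest

    help(pytest.fixture)  # prints the decorator's docstring at the prompt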

6 changes: 3 additions & 3 deletions doc/en/cache.rst
@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
E Failed: bad luck
test_50.py:7: Failed
-2 failed, 48 passed in 0.07s
+2 failed, 48 passed in 0.12s
If you then run it with ``--lf``:

@@ -230,7 +230,7 @@ If you run this command for the first time, you can see the print statement:
test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
-1 failed in 0.02s
+1 failed in 0.12s
If you run it a second time, the value will be retrieved from
the cache and nothing will be printed:
@@ -249,7 +249,7 @@ the cache and nothing will be printed:
E assert 42 == 23
test_caching.py:20: AssertionError
-1 failed in 0.02s
+1 failed in 0.12s
See the :ref:`cache-api` for more details.

4 changes: 2 additions & 2 deletions doc/en/example/markers.rst
@@ -499,7 +499,7 @@ The output is as follows:
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
.
-1 passed in 0.01s
+1 passed in 0.12s
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
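
As a hedged illustration of that difference (``my_marker`` and
``hello_world`` follow the example above):

.. code-block:: python

    import pytest

    def hello_world():
        pass

    # with_args stores hello_world as a marker argument; calling
    # pytest.mark.my_marker(hello_world) directly would instead treat
    # hello_world as the function being decorated
    my_marker = pytest.mark.my_marker.with_args(hello_world)

    @my_marker
    def test_with_args():
        pass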

@@ -551,7 +551,7 @@ Let's run this without capturing output and see what we get:
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
.
-1 passed in 0.02s
+1 passed in 0.12s
marking platform specific tests with pytest
--------------------------------------------------------------
10 changes: 5 additions & 5 deletions doc/en/example/parametrize.rst
@@ -54,7 +54,7 @@ This means that we only run 2 tests if we do not pass ``--all``:
$ pytest -q test_compute.py
.. [100%]
-2 passed in 0.01s
+2 passed in 0.12s
We run only two computations, so we see two dots.
Let's run the full monty:
@@ -73,7 +73,7 @@ let's run the full monty:
E assert 4 < 4
test_compute.py:4: AssertionError
-1 failed, 4 passed in 0.02s
+1 failed, 4 passed in 0.12s
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
@@ -343,7 +343,7 @@ And then when we run the test:
E Failed: deliberately failing for demo purposes
test_backends.py:8: Failed
-1 failed, 1 passed in 0.02s
+1 failed, 1 passed in 0.12s
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function instantiated each of the DB values during the setup phase, while ``pytest_generate_tests`` generated two corresponding calls to ``test_db_initialized`` during the collection phase.
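
A minimal sketch of that division of labor (the names ``db``, ``"DB1"``,
``"DB2"`` and ``test_db_initialized`` follow the surrounding example; the
fixture body is an assumption):

.. code-block:: python

    # conftest.py -- sketch
    import pytest

    def pytest_generate_tests(metafunc):
        # collection phase: generate one call per backend name
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["DB1", "DB2"], indirect=True)

    @pytest.fixture
    def db(request):
        # setup phase: instantiate the backend named by the parameter
        return {"DB1": object(), "DB2": object()}[request.param]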

@@ -454,7 +454,7 @@ argument sets to use for each test function. Let's run it:
E assert 1 == 2
test_parametrize.py:21: AssertionError
-1 failed, 2 passed in 0.03s
+1 failed, 2 passed in 0.12s
Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@@ -479,7 +479,7 @@ Running it results in some skips if we don't have all the python interpreters in
========================= short test summary info ==========================
SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
-3 passed, 24 skipped in 0.24s
+3 passed, 24 skipped in 0.12s
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
8 changes: 4 additions & 4 deletions doc/en/example/simple.rst
@@ -65,7 +65,7 @@ Let's run this without supplying our new option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
-1 failed in 0.02s
+1 failed in 0.12s
And now with supplying a command line option:

@@ -89,7 +89,7 @@ And now with supplying a command line option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
-1 failed in 0.02s
+1 failed in 0.12s
You can see that the command line option arrived in our test. This
completes the basic pattern. However, one often wants to process
@@ -261,7 +261,7 @@ Let's run our little function:
E Failed: not configured: 42
test_checkconfig.py:11: Failed
-1 failed in 0.02s
+1 failed in 0.12s
If you only want to hide certain exceptions, you can set ``__tracebackhide__``
to a callable which gets the ``ExceptionInfo`` object. You can for example use
@@ -445,7 +445,7 @@ Now we can profile which test functions execute the slowest:
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
-0.20s call test_some_are_slow.py::test_funcslow1
+0.21s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
============================ 3 passed in 0.12s =============================
2 changes: 1 addition & 1 deletion doc/en/example/special.rst
@@ -81,4 +81,4 @@ If you run this without output capturing:
.test other
.test_unit1 method called
.
-4 passed in 0.01s
+4 passed in 0.12s
10 changes: 5 additions & 5 deletions doc/en/fixture.rst
@@ -361,7 +361,7 @@ Let's execute it:
$ pytest -s -q --tb=no
FFteardown smtp
-2 failed in 0.79s
+2 failed in 0.12s
We see that the ``smtp_connection`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@@ -515,7 +515,7 @@ again, nothing much has changed:
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
-2 failed in 0.77s
+2 failed in 0.12s
Let's quickly create another test module that actually sets the
server URL in its module namespace:
@@ -692,7 +692,7 @@ So let's just do another run:
test_module.py:13: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
-4 failed in 1.69s
+4 failed in 0.12s
We see that our two test functions each ran twice, against the different
``smtp_connection`` instances. Note also that with the ``mail.python.org``
@@ -1043,7 +1043,7 @@ to verify our fixture is activated and the tests pass:
$ pytest -q
.. [100%]
-2 passed in 0.01s
+2 passed in 0.12s
You can specify multiple fixtures like this:

@@ -1151,7 +1151,7 @@ If we run it, we get two passing tests:
$ pytest -q
.. [100%]
-2 passed in 0.01s
+2 passed in 0.12s
Here is how autouse fixtures work in other scopes:

6 changes: 3 additions & 3 deletions doc/en/getting-started.rst
@@ -108,7 +108,7 @@ Execute the test function with “quiet” reporting mode:
$ pytest -q test_sysexit.py
. [100%]
-1 passed in 0.01s
+1 passed in 0.12s
Group multiple tests in a class
--------------------------------------------------------------
@@ -145,7 +145,7 @@ Once you develop multiple tests, you may want to group them into a class. pytest
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError
-1 failed, 1 passed in 0.02s
+1 failed, 1 passed in 0.12s
The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.

@@ -180,7 +180,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
-1 failed in 0.02s
+1 failed in 0.12s
More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.

6 changes: 3 additions & 3 deletions doc/en/parametrize.rst
@@ -205,7 +205,7 @@ If we now pass two stringinput values, our test will run twice:
$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py
.. [100%]
-2 passed in 0.01s
+2 passed in 0.12s
Let's also run with a stringinput that will lead to a failing test:

@@ -225,7 +225,7 @@ Let's also run with a stringinput that will lead to a failing test:
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
test_strings.py:4: AssertionError
-1 failed in 0.02s
+1 failed in 0.12s
As expected our test function fails.

@@ -239,7 +239,7 @@ list:
s [100%]
========================= short test summary info ==========================
SKIPPED [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:2
-1 skipped in 0.00s
+1 skipped in 0.12s
Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
those sets cannot be duplicated, otherwise an error will be raised.
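
A hedged sketch of that constraint (the argument names ``x`` and ``y`` are
illustrative):

.. code-block:: python

    def pytest_generate_tests(metafunc):
        metafunc.parametrize("x", [1, 2])
        metafunc.parametrize("y", ["a", "b"])  # fine: 'y' is a new name
        # metafunc.parametrize("x", [3, 4])    # error: 'x' already used

    def test_combined(x, y):
        assert isinstance(x, int) and isinstance(y, str)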
2 changes: 1 addition & 1 deletion doc/en/unittest.rst
@@ -219,7 +219,7 @@ Running this test module ...:
$ pytest -q test_unittest_cleandir.py
. [100%]
-1 passed in 0.01s
+1 passed in 0.12s
... gives us one passed test because the ``initdir`` fixture function
was executed ahead of the ``test_method``.
4 changes: 2 additions & 2 deletions doc/en/warnings.rst
@@ -64,7 +64,7 @@ them into errors:
E UserWarning: api v1, should use functions from v2
test_show_warnings.py:5: UserWarning
-1 failed in 0.02s
+1 failed in 0.12s
The same option can be set in the ``pytest.ini`` file using the ``filterwarnings`` ini option.
For example, the configuration below will ignore all user warnings, but will transform
@@ -407,7 +407,7 @@ defines an ``__init__`` constructor, as this prevents the class from being insta
class Test:
-- Docs: https://docs.pytest.org/en/latest/warnings.html
-1 warnings in 0.00s
+1 warnings in 0.12s
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.

