aggregate_tests.py fails #2168

Open
VickyMerzOwn opened this issue Nov 6, 2022 · 4 comments

@VickyMerzOwn
Contributor

Description of issue or feature request:
Getting the following errors on running python3 aggregate_tests.py from within the tests/ directory.

Current behavior:

======================================================================
ERROR: test_metadata_generation (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: test_metadata_generation
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
    module = self._get_module_from_name(name)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
    __import__(name)
  File "/Users/usr1/Desktop/tuf/myfork/python-tuf/tests/test_metadata_generation.py", line 10, in <module>
    from tests import utils
ModuleNotFoundError: No module named 'tests'

Along with the following error:

======================================================================
ERROR: test_metafile_serialization (test_metadata_serialization.TestSerialization) (case='length 0')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/usr1/Desktop/tuf/myfork/python-tuf/tests/utils.py", line 66, in wrapper
    function(test_cls, data)
  File "/Users/usr1/Desktop/tuf/myfork/python-tuf/tests/test_metadata_serialization.py", line 324, in test_metafile_serialization
    metafile = MetaFile.from_dict(copy.copy(case_dict))
  File "/Users/usr1/.local/share/virtualenvs/flask-sqlalchemy/lib/python3.8/site-packages/tuf/api/metadata.py", line 1123, in from_dict
    return cls(version, length, hashes, meta_dict)
  File "/Users/usr1/.local/share/virtualenvs/flask-sqlalchemy/lib/python3.8/site-packages/tuf/api/metadata.py", line 1088, in __init__
    self._validate_length(length)
  File "/Users/usr1/.local/share/virtualenvs/flask-sqlalchemy/lib/python3.8/site-packages/tuf/api/metadata.py", line 1056, in _validate_length
    raise ValueError(f"Length must be > 0, got {length}")
ValueError: Length must be > 0, got 0

Expected behavior:

No errors on running python3 aggregate_tests.py from within tests/

@VickyMerzOwn
Contributor Author

For the first failure, can I simply use import utils directly instead of from tests import utils?
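
A rough sketch of what that change might look like (hypothetical and untested; it keeps the package-style import as the first choice and only falls back when the module is run directly from tests/):

    # Hypothetical sketch for the top of tests/test_metadata_generation.py:
    # prefer the package-style import, but fall back to a plain import when
    # the module is run directly from within the tests/ directory.
    try:
        from tests import utils
    except ImportError:
        import utils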

@jku
Member

jku commented Nov 6, 2022

The tests expect tuf to be installed, and will test that installed version. I'm not sure that makes sense... but that's how it currently works. The second error implies you are running the tests from one version of tuf against the source of another version, so I believe it's a related error.
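
One quick way to see which copy of tuf the tests actually import (a hypothetical diagnostic snippet, not part of the test suite):

    # Hypothetical diagnostic: print where the imported tuf package lives.
    # A path under site-packages means an installed copy is being tested;
    # a path under the repository checkout means the local source is.
    import tuf

    print(tuf.__file__)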

The recommended way to develop is

  • set up venv, source it
  • pip install -r requirements-dev.txt -- this installs the source dir as editable so the tests find the correct source

I'm guessing the tests should start working like that... but I wouldn't mind if someone figured out the test imports in a way that doesn't require the editable install.

@jku
Member

jku commented Nov 8, 2022

I'm hesitant to just close this. It was likely a "user error", but... the failure mode is so weird when someone does not install the source dir as editable that I'm not surprised it looks like a bug. It feels like we should be able to do better.

@joshuagl
Member

Thanks for filing this issue!

We should improve the developer experience when testing, so that a contributor can just run the aggregate_tests.py script as the submitter did.

We need to change the script to support that:

  • Do we want to run the tests against the installed tuf, or against the version in the script's directory hierarchy? The latter seems the appropriate default, but there may be a case for running tests against an installed version too.
  • Shouldn't the import work when running the tests from the tests/ directory? Are we doing the import wrong and expecting the tests to be installed? (See the sketch below.)
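
As a sketch of what that could look like (hypothetical and untested; it assumes the repository root is the parent of the tests/ directory), aggregate_tests.py could put the checkout at the front of sys.path before discovery, so that from tests import utils resolves and the in-tree tuf shadows any installed copy:

    # Hypothetical sketch of an aggregate_tests.py that tests the in-tree source:
    # prepend the repository root (the parent of tests/) to sys.path so that
    # "from tests import utils" and "import tuf" resolve against the checkout,
    # then run unittest discovery over the tests/ directory.
    import os
    import sys
    import unittest

    TESTS_DIR = os.path.dirname(os.path.abspath(__file__))
    REPO_ROOT = os.path.dirname(TESTS_DIR)
    sys.path.insert(0, REPO_ROOT)

    if __name__ == "__main__":
        suite = unittest.TestLoader().discover(start_dir=TESTS_DIR)
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)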
