A Scenario is a bespoke data structure--kind of like a combination test fixture, harness, and test. A Scenario contains some code in one or more modules, configuration, policy, and expectations. While I was able to apply some types in #651, Scenarios are otherwise undocumented--including the conventions and constraints. They are somewhat like a micro-framework, in that regard.
In the context of #1136, I think it's fair to take another look at this concept, since a Scenario is intended to be reusable, and the execution logic would ostensibly live in whatever comes out of #1136.
Coupling
I think my main gripe is that the expectations are too tightly coupled, and reduce the potential value of each scenario by limiting what can be asserted about their behavior.
One part of this is that individual Scenarios cannot be augmented by any given consumer (which is neither bad nor good, but is just a tradeoff); i.e., a Scenario expected to fail when running under package X can only be asserted to fail--package X can't make assertions about why and how it fails. Nor could package X modify the Scenario with hypothetical configuration that would cause the scenario to succeed.
We don't have so many individual Scenarios that it would be impossible to extract each into its own test and make assertions through the usual means. It is straightforward to write an AVA test macro that allows both augmenting a Scenario's configuration and making custom assertions.
Expanded Execution Context
Scenarios are designed to run in a child process, and the STDOUT of that child process is the only thing that can be checked for correctness. This is inflexible. It makes sense for isolation--so we should continue to do this--but it would be nice to be able to inspect data structures communicated via IPC or otherwise. In #772, I was able to run scenarios in worker threads. That said, not every environment will be able to do much more than dump a string.
In-Memory Filesystems
A Scenario should be able to be represented 1:1 as files on disk and options/flags (if this is not true, then we have a problem).
If that's true, we needn't use a real filesystem, and can use an in-memory filesystem instead! In lieu of adding more Scenarios in #772, I used memfs and stored my "scenarios" in its .json format.
This format can include policies and sources (since those live on disk), but it does not include assertions (see Coupling above) nor any other configuration. Having used both, I prefer using memfs and JSON-based scenarios (there are tradeoffs, of course) over Scenarios. If there's one advantage I'd like to call out, it's that this is far easier to understand; we load files like usual and write AVA tests.
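For illustration, a memfs-style JSON scenario is just a mapping of paths to file contents--the same shape `vol.fromJSON()` accepts. The sketch below mimics that shape with a plain `Map` so it stays dependency-free; the module source and policy contents are invented:

```javascript
// A memfs-style "JSON scenario": paths mapped to file contents. memfs's
// vol.fromJSON() takes exactly this shape; a plain Map stands in here
// so the sketch has no dependencies. Contents are invented examples.
const scenarioJson = {
  '/app/index.js': "module.exports = () => 'hello';",
  '/app/package.json': JSON.stringify({ name: 'app', main: 'index.js' }),
  '/app/policy.json': JSON.stringify({ resources: {} }),
};

// Minimal fromJSON-alike: flatten into an in-memory path -> content map.
const fromJSON = (json) => new Map(Object.entries(json));

const vol = fromJSON(scenarioJson);
console.log(vol.get('/app/index.js'));
console.log([...vol.keys()].length, 'files');
```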
Note
I am not suggesting we abandon running against a real filesystem using a real CLI--that has a lot of value!--but rather that testing the CLI is a separate concern from testing a Scenario.
What we don't have is a good way to reuse these "JSON-based scenarios" in other packages. So my question is: does it make sense to build this out?
What I like about Scenarios is that their sources are easy to read and change--they are just functions. In the case of a "JSON-based scenario", you're working with code embedded in JSON. There are ways to make them easier to work with, of course--and you could even "materialize" a JSON-based scenario to the same one on disk, and execute it directly--just swap out your fs object. That said, Scenario sources have very low churn, so it may make sense to de-prioritize this use-case.