Since Robot Framework, unlike many other test frameworks, does not print a stack trace on failure, it often takes me and my colleagues longer than necessary to locate the exact line at which a failure inside a Robot test case occurs.
The problem is that if a test suite's author forgets to add unique failure messages to all of its checks, there is no indication of where exactly a test failure occurred. This happens quite a lot in practice where I work: through the use and re-use of custom keywords, you can write readable test cases without using failure messages at all. That saves quite a bit of effort, but it also makes a failure impossible to locate, since no stack trace is printed. Coming from programming languages with error stack traces, most of my colleagues still work this way, oblivious to the fact that they are writing tests whose failure logs are completely useless. This is especially frustrating when we're dealing with Robot test failures that only happen in a CI/CD pipeline and cannot be reproduced or debugged locally.
This would not be a problem if Robot logged a stack trace with every failure, similar to how Python prints a traceback with every raised exception, which is why I advocate adding this feature to Robot Framework. It would save a lot of time locating failures in longer test cases, sparing us the trouble of either re-running every failed test case with a debugger attached or adding a unique failure message to every single check in every test case, suite, and project.
Robot logs the traceback on the DEBUG level, so it can be seen when running tests with --loglevel DEBUG. If you don't want to see it by default but nevertheless want it to be collected during execution, you can use --loglevel DEBUG:INFO to collect DEBUG-level messages while keeping INFO as the default level shown in the log file.
The reason the traceback isn't logged on the INFO level, and thus isn't visible by default, is that it looks confusing to non-technical users and is typically only needed when there are bugs in libraries. Well-behaved libraries ought to provide enough context information even without it.
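The two invocations described above can be sketched as follows (the `tests/` path is a placeholder for your own suite; the `--loglevel` flag itself is a standard Robot Framework command-line option):

```shell
# Run the suite with tracebacks visible: everything logged at DEBUG
# level, including Python tracebacks for failures, appears in log.html.
robot --loglevel DEBUG tests/

# Collect DEBUG messages (tracebacks included) during execution, but
# keep INFO as the default visible level in the generated log file.
# The traceback is still there; expand the log level in log.html to see it.
robot --loglevel DEBUG:INFO tests/
```

With the second form you pay a small cost in output size but keep the log readable by default, while the traceback remains available when you need to dig into a CI-only failure.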