Feature #10237: Refactor and clean up the automated test suite
Code cleverness vs gherkin explicitness in our feature files
This is best explained by an example. From
features/localization.feature (as of 1.7-rc1):
Scenario: The Report an Error launcher will open the support documentation in supported non-English locales
  Given I have started Tails from DVD without network and stopped at Tails Greeter's login screen
  And the network is plugged
  And I log in to a new session in German
  And Tails seems to have booted normally
  And Tor is ready
  When I double-click the Report an Error launcher on the desktop
  Then the support documentation page opens in Tor Browser
Here, alan rightly pointed out (when reviewing the "big diff" for Tails 1.7~rc1): "What tells [us that] Tails is not in english here?"
Indeed, we're a bit too clever here: we set @language = "German" in the 'I log in to a new session in German' step, so that the 'the support documentation page opens in Tor Browser' step will implicitly (i.e. in a hidden manner) look for the German version of the picture identifying the support docs. We should probably be more explicit. Reading the scenario doesn't communicate this expected outcome at all, so a human reading it would have no expectation about the language of the docs that open in the browser, just that some docs appear.
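The hidden coupling can be sketched in plain Ruby (the variable and method names below are illustrative stand-ins for the Cucumber steps, not copied from the actual test suite):

```ruby
# Sketch of the implicit behavior: the login step records the session
# language in a shared @language variable, and a later, seemingly
# language-neutral step silently consumes it.
@language = "English"

# Stands in for the 'I log in to a new session in German' step.
def log_in_to_new_session(language)
  @language = language  # hidden side effect, invisible in the gherkin
end

# Stands in for the 'the support documentation page opens in Tor Browser'
# step: it picks a localized screenshot without the scenario saying so.
def support_documentation_picture
  "SupportDocumentation#{@language}.png"
end

log_in_to_new_session("German")
puts support_documentation_picture  # silently looks for the German picture
```

Nothing in the scenario text hints that the last step's behavior depends on the earlier login step's side effect; that is exactly the cleverness at issue.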
In this specific case we could just add an optional part to the step so that it can be expressed as 'the German support documentation page opens in Tor Browser', and if that part is given, assert that @language == "German". But, really, we could stop using @language in that step completely. The uses of @language in other steps still make sense, though, since they occur in situations where the language isn't the important thing we're testing. It could be that at some point we want to use the 'the support documentation page opens in Tor Browser' step when German is in use but we don't really care about the language then, and that is a case for adding the optional part + assert, I guess.
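The optional-part-plus-assert idea could look roughly like this (the regex and the picture naming scheme are assumptions for illustration, not the actual step definition):

```ruby
# The step matcher accepts an optional language word, so both of these work:
#   "the support documentation page opens in Tor Browser"
#   "the German support documentation page opens in Tor Browser"
STEP_REGEX = /^the (?:([A-Z]\w+) )?support documentation page opens in Tor Browser$/

def expected_support_doc_picture(step_text, session_language)
  m = STEP_REGEX.match(step_text) or raise "unrecognized step: #{step_text}"
  if m[1]
    # The scenario is explicit about the language: sanity-check it against
    # the language the login step configured.
    unless m[1] == session_language
      raise "scenario says #{m[1]} but session language is #{session_language}"
    end
    "SupportDocumentation#{m[1]}.png"
  else
    # No language given: the scenario doesn't care which localization shows
    # up, so fall back to the session language.
    "SupportDocumentation#{session_language}.png"
  end
end

expected_support_doc_picture(
  "the German support documentation page opens in Tor Browser", "German"
)  # => "SupportDocumentationGerman.png"
```

With this, a mismatch between what the gherkin promises and what the session actually set up fails loudly instead of silently passing against the wrong picture.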
In general, we should stop being too clever, and really make sure that the scenarios explicitly express all the important parts. Reading the gherkin of a scenario should be all that is needed for a human to replicate the essential parts that our automated test suite would do.
Do we agree on this?