Moodle uses the Behat testing framework, which does automated functional testing (that's the type of automatic testing where it clicks on things in a browser window, in exactly the same way as if a user was doing it).
As a practical tool, this works great - yes, it's hard to install, it only really works properly with Firefox at the moment, and it's very slow, but those problems are probably unavoidable.
I have two issues with it, so I'm going to write a rant.
1. Writing complex tests is very slow because there's no interactive mode.
Behat tests are written as a sequence of sentences like:
And I click on the 'span.someclass' 'css_element'
Each sentence is easy to write but there's lots of room for error. In this case, what if there are two elements on the page with that class? What if I got the CSS class wrong? What if I'm not actually on the right page that has that element yet, and I need to click another link first?
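To make that concrete, here's a toy sketch (in Python rather than Behat's PHP, with a made-up `find_unique` helper and a hard-coded page) of the check a step runner effectively has to perform: a selector is only usable if it matches exactly one element, so both "wrong class" and "two elements" turn into a failed step.

```python
from html.parser import HTMLParser

class ClassCollector(HTMLParser):
    """Collects every tag whose class attribute contains the target class."""
    def __init__(self, target_class):
        super().__init__()
        self.target = target_class
        self.matches = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.target in classes:
            self.matches.append(tag)

def find_unique(html, css_class):
    """Return the single element with the given class, or explain the failure -
    the same 'what ifs' a step like "I click on the 'span.someclass'
    'css_element'" runs into."""
    parser = ClassCollector(css_class)
    parser.feed(html)
    if not parser.matches:
        raise LookupError(f"no element with class '{css_class}' - wrong class, or wrong page?")
    if len(parser.matches) > 1:
        raise LookupError(f"{len(parser.matches)} elements with class '{css_class}' - ambiguous")
    return parser.matches[0]

page = "<div><span class='someclass'>A</span><span class='someclass'>B</span></div>"
try:
    find_unique(page, "someclass")
except LookupError as e:
    print(e)   # two elements match, so the step would fail
```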
The answer to those 'what ifs' is that you have to figure out why it didn't work, change it, then run the whole test again. That's a familiar cycle for software developers, but here it's particularly painful: Behat tests always begin from the home page of an empty Moodle site, so there are usually a good few steps before you even reach the main part of your test (log in, go to the course page, create an activity, and so on). As a result you have to wait at least thirty seconds before you can see whether your change works - and if the problem is near the end of a complex test, it could be several minutes.
For anyone reading this who isn't a software developer: thirty seconds is a Long Time. You know when you just missed the bus and have to wait 25 minutes in the rain for the next one? That's about how long thirty seconds is to a software developer.
When writing a Behat test, what you're doing is taking an interactive activity (clicking on things in a web browser) and turning it into a script. That's fine, but it would be much more efficient if it were possible to actually do this interactively.
In other words, I could type my example sentence above into a Behat console; if it didn't work, rather than 'test failed, now you have to run it again from the start, loser' I would simply get an immediate 'nope' and be able to try a different variant of the command. And, with the web browser (that Behat uses for testing) conveniently open already at the right place, I could even use the web developer tools to check on the current page structure to see why it wasn't working.
Once you've got a test working, the current process is fine, but for actually developing it, an interactive console would be a huge advance. There's an issue about this here, and a demo where somebody has made it here (the link in the demo doesn't work, so I don't know how sketchy it is), but it doesn't seem to be a solved problem.
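Nothing like this exists in Behat today as far as I know, but the shape of such a console is simple enough. Here's a toy sketch in Python (the step definitions, patterns, and the pretend page contents are all made up for illustration): register steps as regex patterns, feed sentences in one at a time, and get an immediate 'nope' on any sentence that doesn't match or whose action fails, with the session left alive for the next attempt.

```python
import re

STEPS = []  # (compiled pattern, action) pairs, like Behat's step definitions

def step(pattern):
    """Register an action for sentences matching this pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

# A couple of made-up step definitions standing in for real Behat ones.
@step(r"I click on the '(.+)' 'css_element'")
def click(selector):
    if selector != "span.someclass":   # pretend only this element exists on the page
        raise LookupError(f"no element matches {selector!r}")
    return f"clicked {selector}"

@step(r"I log in as '(.+)'")
def log_in(user):
    return f"logged in as {user}"

def run_sentence(sentence):
    """Try one sentence; return a result or an immediate 'nope'."""
    # Strip the Given/When/Then/And keyword - it carries no meaning anyway.
    body = re.sub(r"^(Given|When|Then|And)\s+", "", sentence)
    for pattern, action in STEPS:
        m = pattern.fullmatch(body)
        if m:
            try:
                return action(*m.groups())
            except Exception as e:
                return f"nope: {e}"
    return "nope: no step definition matches"

# An interactive session would just be: for line in stdin, print(run_sentence(line)).
print(run_sentence("And I click on the 'span.someclass' 'css_element'"))
print(run_sentence("And I click on the 'span.wrongclass' 'css_element'"))
```

The point is the feedback loop: a failed sentence costs you one keypress, not a full test run from the empty-site home page.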
2. Somebody got all hung up on testing theory
There are theories about how you do this type of testing. It's strictly from the user's point of view. You don't write a test unless you can define what the benefit of that feature is for the user. Each scenario contains a single independent test. Yadda, yadda, yadda.
All of this is just so much meaningless junk that distracts from the really important thing which, in practice, Behat testing actually does. What it actually does is, in a repeatable and fully automated manner, carry out a sequence of steps and check that they have the expected result. So it's a shame it gets hidden behind Fully Agile Runtime Testing methodology, or whatever it's called, with everyone having to make up stupid prefixes at the top of the file like:
In order to take over the world
As a 37-year old vegetarian who hates dogs
I need to make sure the damn availability conditions page in Moodle actually works, okay?
That example could be included in a real test script in Moodle - I might have some trouble getting it past code review, but other than that, it would work exactly as well as the real text in that test, which is to say, it would have no practical effect at all and be of no use even as documentation of the test.
This is a waste of time. We already have a user interface feature and at this point, nobody should care why or who it's for. We want to know if it works or not. Maybe it needs testing from the perspective of different roles, maybe it doesn't; that's a detail that can be handled in the test. Somebody should have already decided it's useful (or if not it can be deleted) - that's a completely independent question from testing it. Do you need to pretend that your testing system is a part of your 'agile' requirements list? Nope. Completely independent things, much better kept separate.
You know what would be better? A documentation block at the top of a test that describes what is covered by the test. (Unless that's already obvious from the name.) Sort of like you have in other programming languages.
Just to hammer it home, imagine doing this in another programming language (and yes Behat scripts are a programming language), like PHP. Here's an example:
* In order to have users collaborate with each other
* As a teacher
* I need to let Moodle APIs know the current version of the forum module.
(That is, in 'agile' theory, the comment that should go at the top of mod/forum/version.php.)
The same type of issue applies to each sentence line, which starts with a word that's ignored, something like:
Given I follow "C1"
When I edit the section "1"
Then "Restrict access" "fieldset" should not exist
That first word (Given/When/Then) doesn't do anything; it's just part of the 'how you are supposed to express a test for no good reason because it fits somebody's theory' junk. And to make matters worse, Moodle have an actual coding standard requirement that stops you using it sensibly: your scenario has to have exactly one 'Given', one 'When', and one 'Then' (you can have as many 'And' lines between them as you like). Back in reality we obviously want to check multiple things in a single test scenario - otherwise a Behat run would take a million years instead of the mere 48,000 it takes at present - so this means we start every line with 'And'.
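It's easy to demonstrate that the keyword is decorative: in Gherkin, it's stripped before the step text is matched against step definitions, so these four lines all run exactly the same code. A sketch of that normalisation in Python:

```python
import re

def canonical(sentence):
    """Drop the leading Gherkin keyword - the step matcher never sees it."""
    return re.sub(r"^\s*(Given|When|Then|And|But)\s+", "", sentence)

steps = [
    'Given I follow "C1"',
    'When I follow "C1"',
    'Then I follow "C1"',
    'And I follow "C1"',
]
# All four sentences reduce to the same step text, so they dispatch identically.
assert len({canonical(s) for s in steps}) == 1
print(canonical(steps[0]))   # I follow "C1"
```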
You know what would be better than these meaningless words? Bullet points.
- I follow "C1"
- I edit the section "1"
- "Restrict access" "fieldset" should not exist
Not sure that's allowed in Behat syntax, but it would be a nice improvement. Another nice improvement would be, you know, leaving out entirely the word that doesn't do anything.
Right, rant over. Behat does work pretty well, but the first issue makes it rather more painful to create tests than it should be, and the second just makes it annoying (at least if you're me).