by Evan Giles
Catalyst manages a lot of Enterprise web applications, enough that we always seem to be upgrading one instance or another. More and more in the world of cloud-native application stacks, an application upgrade may also come with system architecture changes and improvements. This could be anything from updating the underlying operating system to deploying into a new container-based platform.
There is always a cost and risk to change. Sometimes high, sometimes insignificant. And our years of experience in the managed services game mean we have developed considerable process and workflow for managing change.
In the case of application development, this means we include steps like unit tests in our application code, as well as getting our Quality Assurance team to test the application in a non-production environment.
In the case of some of our larger Enterprise Moodle LMS instances, our clients have very high performance requirements and a persistent and impatient student body. Some of our larger sites have thousands of concurrent user sessions and the LMS is expected to maintain a sub-second page build time.
Without a realistic automated load testing strategy, Catalyst would not be able to confidently roll out major changes to our Enterprise Moodle customers without risking a degradation in performance. Having the ability to run rounds of load testing also enables us to experiment with new toolsets and architectures, and validate whether they yield better performance outcomes.
We have used a number of tools, but more and more we have settled on JMeter, an open source, pure Java application.
We recently invested some time in a round of internal development to make it easier for us to launch a round of load testing on our Moodle sites, gather the results, and meaningfully compare them to previous rounds of testing. One clear discovery was that the existing setup process required too much complexity and manual configuration.
We assessed a number of different potential approaches including:
- Using the Moodle JMeter integration which generates some test plans automatically.
- Using the JMeter recording facility, allowing us to “play back” some real user journeys.
- Tidying up and reusing what we already had.
In the end we reviewed all of the above ideas, and came up with a totally new testing plan. The defining moment came when we realized that a typical JMeter test plan in isolation would not be reusable across our multiple clients, and that we needed to think of this as an independent project, stored in a Git repository of its own and built for the purpose of applying to arbitrary Moodle load testing requirements.
So that's what we have now, a repository which contains:
- A JMeter test plan (an XML file which tells JMeter exactly what to do)
- A list of testing (fake) users that will be provisioned into Moodle and then used to trigger the load testing.
- A single Moodle course backup. Moodle has a binary archive format of an entire course that allows easy import and export.
- Instructions on how to turn these assets into an actual performance test for a given Moodle site.
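As a rough illustration of the second asset above, the list of fake testing users can be generated mechanically so the same credentials drive both Moodle provisioning and the JMeter test plan (JMeter's CSV Data Set Config element reads exactly this kind of file). The sketch below is a minimal, hypothetical version; the file name, column names, and password scheme are illustrative assumptions, not our actual repository layout.

```python
import csv

def generate_test_users(count, path="loadtest_users.csv"):
    """Write a CSV of fake users for load testing.

    The same file can be fed to a Moodle user-provisioning step and to
    JMeter's CSV Data Set Config. All naming here is illustrative.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password", "email"])
        for i in range(1, count + 1):
            writer.writerow([
                f"loadtest{i:04d}",
                f"LoadTest!{i:04d}",  # throwaway credential, test sites only
                f"loadtest{i:04d}@example.com",
            ])
    return path

generate_test_users(500)
```

Keeping the user list as a generated artifact rather than a hand-maintained file means a round of testing can be scaled up or down just by changing the count.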
This means that we can now take an arbitrary Moodle site, and simulate a flock of busy users taking complicated activity journeys around that application. Using this, we can give any Moodle site a good workout, tracking performance metrics along the way. And because we can easily repeat this process, we can track changes in performance metrics after application changes.
Now it's much easier for us to do meaningful A/B testing, answering questions like:
- Will doubling the application server's RAM improve performance?
- When is the best time to trigger cloud auto-scaling?
- Is this site faster with Apache or Nginx?
- What difference does it make if we … ?
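Answering questions like these comes down to comparing the response times recorded in two rounds of testing. A minimal sketch of that comparison, assuming the JMeter results were saved as CSV-format JTL files with the standard `elapsed` column (response time in milliseconds); the file names are hypothetical:

```python
import csv
import statistics

def load_elapsed(path):
    # Read response times (ms) from a CSV-format JMeter JTL results file.
    with open(path, newline="") as f:
        return [int(row["elapsed"]) for row in csv.DictReader(f)]

def summarise(times):
    # Reduce one round of results to the metrics we compare between rounds.
    times = sorted(times)
    p95 = times[int(0.95 * (len(times) - 1))]
    return {"median_ms": statistics.median(times), "p95_ms": p95}

# e.g. to compare an Apache round against an Nginx round:
# baseline = summarise(load_elapsed("results_apache.jtl"))
# candidate = summarise(load_elapsed("results_nginx.jtl"))
```

Comparing medians and high percentiles, rather than averages, keeps a handful of slow outlier requests from masking a genuine change in typical page build time.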
And best of all, because we store the entire toolset in a code repository, every time we use this tool and go through this exercise, we can improve the selection of user journeys taken through the site, making the tool more useful as we go.