Cross-platform automated testing

Automated testing plays an important role in Code's ongoing quality assurance strategy; together with the other test approaches and techniques we use, it ultimately helps us deliver high-quality work for our clients.

What is cross-platform automated testing?

Basically, it's software that allows you to test how other software is functioning.

In our case, we write tests that directly imitate the user's journey, driving web browsers or mobile devices themselves to automatically check that the application is working properly. Once the suite of tests for a particular project is in place, it can be re-used to test the software each time any changes or tweaks are made to the code.
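
To make that concrete, here's a minimal sketch of the kind of browser-driving test described above, written in Python with Selenium WebDriver (one of many tools that can do this job; the URL and element selectors are invented for illustration):

    # A minimal browser-driving test: open a page, act as the user would,
    # then check the result. The URL and selectors here are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Imitate the user's journey: visit the page and fill in the form.
        driver.get("https://example.com/job-alerts")
        driver.find_element(By.ID, "email").send_keys("user@example.com")
        driver.find_element(By.ID, "subscribe").click()

        # Check the application responded as expected.
        confirmation = driver.find_element(By.CSS_SELECTOR, ".confirmation").text
        assert "thanks" in confirmation.lower()
    finally:
        driver.quit()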

This is a much more efficient alternative to having to repeatedly check everything manually; there's less room for error, and it allows you to get feedback on the current functional state of an application in a matter of minutes.

Why is it important?

A solid test automation strategy has always been a critical part of the software development process. But, with most applications now spanning a number of different platforms, it's never been more important.

Whether you're testing a large-scale web app across different desktop and mobile browsers, or native iOS and Android apps on a wide range of devices, a full suite of automated tests will quickly flag any issues or bugs so that they can be fixed ASAP.

How do we do it?

As part of Code's agile development process, we prefer to apply a behaviour-driven approach to automated testing (that is, one that reflects the behaviour of the end user), and we use proven tools like Cucumber to support this.
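
For example, a Cucumber feature for the 'signing up to job alerts' journey mentioned later in this article might read something like this (the exact wording is invented for illustration):

    Feature: Signing up to job alerts

      Scenario: A visitor subscribes with a valid email address
        Given I am on the job alerts page
        When I enter "user@example.com" into the email field
        And I press "Subscribe"
        Then I should see a confirmation message

Each line describes a step of the end user's behaviour; no implementation detail appears in the feature itself.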

Our tests are run automatically whenever deployments are made, providing us with immediate feedback, including full HTML reports with embedded screenshots and video, so we can quickly determine whether everything's working as intended.
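
As one illustration of how screenshots end up in those reports, a hook can capture the browser window whenever a step fails. The sketch below assumes Python's behave (a Cucumber-style tool; the article doesn't specify the exact implementation we use) with a Selenium driver stored on the test context:

    # environment.py -- a sketch of a behave hook that saves a screenshot
    # whenever a step fails, so it can be embedded in the test report.
    # Assumes context.browser is a Selenium WebDriver (see the before_all
    # hook sketched later in this article).
    def after_step(context, step):
        if step.status == "failed":
            # Note: step.name may need sanitising for a real file path.
            context.browser.save_screenshot("screenshots/%s.png" % step.name)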

Crucially, the features and scenarios being tested, and any resulting requirements, are captured in plain English rather than in code, so the process and results are easy for anyone (and not just us tech heads) to track.
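
Behind the scenes, each plain-English step is bound to a small piece of automation code. Continuing the hypothetical behave example above, the step definitions for that scenario might look like this:

    # Step definitions binding the plain-English scenario to browser automation.
    # Assumes context.browser is a Selenium WebDriver created in a before_all hook.
    from behave import given, when, then
    from selenium.webdriver.common.by import By

    @given('I am on the job alerts page')
    def open_job_alerts_page(context):
        context.browser.get("https://example.com/job-alerts")

    @when('I enter "{email}" into the email field')
    def enter_email(context, email):
        context.browser.find_element(By.ID, "email").send_keys(email)

    @when('I press "{label}"')
    def press_button(context, label):
        # Hypothetical markup: buttons are located by their visible text.
        context.browser.find_element(By.XPATH, f"//button[text()='{label}']").click()

    @then('I should see a confirmation message')
    def see_confirmation(context):
        assert context.browser.find_element(By.CSS_SELECTOR, ".confirmation").is_displayed()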

With test coverage across every browser and device combination a project needs to support, a successful test run gives us real confidence that the new software will work however and wherever a user chooses to access it.

Why do we do it like this?

Applying a behaviour-driven approach allows us to focus on the user story/journey, working collaboratively to specify the requirements and drive the testing. All of the frameworks and tools needed to run automated tests across the platforms Code supports are accessible through the same common language.
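
As a sketch of what that common-language plumbing can look like in practice, the same scenarios can be pointed at different targets simply by swapping the driver behind the steps. The hook below is illustrative and assumes behave plus Selenium, with a hypothetical TEST_PLATFORM environment variable:

    # environment.py -- choose which target the shared scenarios run against.
    # The TEST_PLATFORM variable and the browser choices are hypothetical.
    import os
    from selenium import webdriver

    def before_all(context):
        if os.environ.get("TEST_PLATFORM") == "firefox":
            context.browser = webdriver.Firefox()
        else:
            context.browser = webdriver.Chrome()

    def after_all(context):
        context.browser.quit()

The step definitions never change; only the driver does, which is what keeps one set of plain-English scenarios usable across platforms.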

The plain-text features/scenarios that get fed in at the start are the same ones that appear in the rich reports we generate at the end, keeping the whole test process consistent and transparent, regardless of whether we're testing an iPhone app, a responsive website or a user journey that spans both.

Everybody in the team, including the client, can see what's being tested and where any potential issues are (rather than the process just being lost in a load of test code somewhere); this is critical in maintaining Code's truly cross-platform agile test process.

This video shows automated testing in action as we check the 'signing up to job alerts' functionality on the Code website:

Technology specifics

We choose the appropriate tools for the job depending on the project, but here are some of our favourites:


Developer resources