Stephen Smith's Blog

Musings on Machine Learning…


Automated Testing in Sage ERP Accpac Development


All modern products rely heavily on automated testing to maintain quality during development. Accpac follows an Agile development methodology where development happens in short three-week sprints, and at the end of every sprint the product should be at a shippable level of quality. This doesn’t mean we would actually ship; that depends on Product Management, which determines the level of new features required. It does mean that quality and bugs wouldn’t be the deciding factor. With such short development sprints, the development team wants to learn about problems as quickly as possible so they can be resolved immediately and a backlog of bugs doesn’t accumulate.

The goal is that as soon as a developer checks in changes, they are immediately built into the full product on a build server and a sequence of automated tests is run on that build to catch any introduced problems. Other, longer-running automated tests run less frequently to catch additional problems.

To perform the continuous builds and to run many of our automated tests we use “Hudson” (http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson). Hudson is an extremely powerful and configurable system for continuously building the complete product. It knows all the project dependencies and what to build when things change. Hudson builds on other tools such as Ant (similar to make or nmake) and Ivy to get things done (http://java-source.net/open-source/build-systems). The key thing is that it builds the complete Accpac system, not just a single module. Hudson also tracks metrics, so we get graphs of test results, performance measurements, lines of code and so on, updated on every build. This is invaluable information for us to review.

The first level of automated testing is the “unit tests” (http://en.wikipedia.org/wiki/Unit_testing). These are tests included with the source code. They are short tests, each of which should run in under one second. They are also self-contained and don’t rely on the rest of the system being present. If other components are required, they are “mocked” with tools like EasyMock (http://easymock.org/). Mocking is a process of simulating the presence of other system components. One nice thing about mocking is that it makes it easy to introduce error conditions, since the mock component can simply return error codes. The unit tests are run as part of building each and every module. They provide a good level of confidence that a programmer hasn’t completely broken a module with the changes they are checking into source control.
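As a concrete illustration, here is what a small JUnit test using EasyMock might look like. The LedgerStore interface, BalanceService class and account numbers below are invented for the example and are not the real Accpac APIs; the point is how the mock stands in for a missing component and makes it trivial to simulate an error condition.

    import static org.easymock.EasyMock.*;
    import static org.junit.Assert.*;
    import org.junit.Test;

    // Hypothetical interfaces for illustration -- not the actual Accpac API.
    interface LedgerStore {
        double fetchBalance(String account) throws StoreException;
    }

    class StoreException extends Exception {
        StoreException(String msg) { super(msg); }
    }

    class BalanceService {
        private final LedgerStore store;
        BalanceService(LedgerStore store) { this.store = store; }

        // Returns the balance, or 0.0 if the store reports an error.
        double safeBalance(String account) {
            try {
                return store.fetchBalance(account);
            } catch (StoreException e) {
                return 0.0;
            }
        }
    }

    public class BalanceServiceTest {
        @Test
        public void returnsBalanceFromStore() throws Exception {
            LedgerStore mock = createMock(LedgerStore.class);
            expect(mock.fetchBalance("1000")).andReturn(250.0);
            replay(mock);

            assertEquals(250.0, new BalanceService(mock).safeBalance("1000"), 0.001);
            verify(mock);  // fails if the expected call never happened
        }

        @Test
        public void errorFromStoreFallsBackToZero() throws Exception {
            // The mock makes it trivial to simulate an error condition.
            LedgerStore mock = createMock(LedgerStore.class);
            expect(mock.fetchBalance("1000")).andThrow(new StoreException("db down"));
            replay(mock);

            assertEquals(0.0, new BalanceService(mock).safeBalance("1000"), 0.001);
            verify(mock);
        }
    }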

The next level of automated tests are longer running. When the Quality Assurance (QA) department first tests a new module, they write a series of test plans; the automation team then converts as many of these as possible into automated test scripts. We use Selenium (http://en.wikipedia.org/wiki/Selenium_(software)), a powerful scripting engine that drives a web browser, simulating actual users. These tests run overnight to ensure everything is fine. We have a subset of these called the BVT (Build Validation Test) that runs against every build as a further smoke test to ensure things are ok.
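To give a feel for what these scripts do, here is a minimal Selenium WebDriver sketch in Java that logs into a web page and checks the result. The URL, element ids and page title are made up for illustration; the real BVT scripts drive the actual Accpac web screens.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();  // launches a real browser
            try {
                // URL and element ids are hypothetical, for illustration only.
                driver.get("http://testserver/accpac/login");
                driver.findElement(By.id("user")).sendKeys("ADMIN");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();

                // A simple assertion: the home page title should appear.
                if (!driver.getTitle().contains("Accpac")) {
                    throw new AssertionError("Login did not reach the home page");
                }
            } finally {
                driver.quit();  // always close the browser, even on failure
            }
        }
    }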

All the tests so far are functional: they test whether the program is functioning properly. But further testing is required to ensure that performance, scalability, reliability and multi-user operation are fine. We record the time taken for all the previous tests, so they can sometimes detect performance problems, but they aren’t the main line of defense. We use JMeter (http://jakarta.apache.org/jmeter/) to test multi-user scalability. JMeter is a powerful tool that simulates any number of clients accessing a server; it tests the server by generating SData HTTP requests from a number of workstations and bombarding the server with them. For straight performance we use VBA macros that access the Accpac Business Logic Layer to write large numbers of transactions, or very large transactions, all timed to make sure performance is acceptable.
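For a sense of what multi-user load testing boils down to, the following hand-rolled Java sketch does the core of what a JMeter test plan does: spin up a pool of simulated clients and bombard a server with timed HTTP requests. The SData endpoint URL and the client and request counts are invented for the example; in practice we build this kind of test plan in JMeter itself rather than coding it by hand.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;

    public class SDataLoadSketch {
        public static void main(String[] args) throws Exception {
            final String endpoint = "http://testserver/sdata/accounts";  // made-up SData URL
            final int clients = 50;        // simulated concurrent users
            final int requestsEach = 100;  // requests per simulated user

            ExecutorService pool = Executors.newFixedThreadPool(clients);
            final AtomicLong totalMillis = new AtomicLong();

            for (int c = 0; c < clients; c++) {
                pool.submit(new Runnable() {
                    public void run() {
                        for (int i = 0; i < requestsEach; i++) {
                            try {
                                long start = System.currentTimeMillis();
                                HttpURLConnection conn =
                                    (HttpURLConnection) new URL(endpoint).openConnection();
                                InputStream in = conn.getInputStream();
                                while (in.read() != -1) { /* drain the response */ }
                                in.close();
                                totalMillis.addAndGet(System.currentTimeMillis() - start);
                            } catch (Exception e) {
                                System.err.println("Request failed: " + e);
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);

            long requests = (long) clients * requestsEach;
            System.out.println("Average latency: " + totalMillis.get() / requests + " ms");
        }
    }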

We still rely heavily on manual QA testers to devise new tests and to find unique ways to break the software, but once they develop the tests we look to automate them so they can be performed over and over without boring a QA tester to death. Automated testing is a vibrant area of computer science where new techniques are always being devised. For instance, we are looking at incorporating “fuzz testing” (http://en.wikipedia.org/wiki/Fuzz_testing) into our automated test suites. This technique is often associated with security testing, but it is proving quite useful for generalized testing as well. Basically, fuzz testing takes the more standard automated test suites and adds variation, either by knowing how to vary input in ways likely to cause problems, or just by running for a long time trying many combinations.
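To make the idea concrete, here is a tiny fuzzing sketch in Java. The parseOrderNumber routine is a hypothetical stand-in for real product code; the loop randomly mutates a known-good input and flags any unexpected exception as a potential bug.

    import java.util.Random;

    public class FuzzSketch {
        // Hypothetical stand-in for an input-handling routine under test.
        static void parseOrderNumber(String input) {
            // A real test would call into the actual product code here.
            Integer.parseInt(input.trim());
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);  // fixed seed so failures are reproducible
            String seed = "100042";       // a known-good input to mutate

            for (int i = 0; i < 100000; i++) {
                char[] mutated = seed.toCharArray();
                // Flip a few characters at random positions to random values.
                for (int flips = rnd.nextInt(3) + 1; flips > 0; flips--) {
                    mutated[rnd.nextInt(mutated.length)] = (char) rnd.nextInt(128);
                }
                String input = new String(mutated);
                try {
                    parseOrderNumber(input);
                } catch (NumberFormatException expected) {
                    // Rejecting bad input cleanly is the correct behaviour.
                } catch (Exception e) {
                    // Any other exception is a bug the fuzzer has uncovered.
                    System.out.println("Crash on input \"" + input + "\": " + e);
                }
            }
            System.out.println("Fuzz run complete");
        }
    }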

As we incorporate all these testing strategies into our SDLC (Software Development Life Cycle), we hope to make Accpac more reliable and each release more trouble-free than the last.

Written by smist08

April 5, 2010 at 5:24 pm