Stephen Smith's Blog

Musings on Machine Learning…


Sage ERP Accpac 6 Performance Testing


As Accpac moves to becoming a Web based application, we want to ensure that performance for the end user is at least as good as it was in the Accpac 5.x days running client/server with VB User Interface programs. Specifically, can you have as many users running Accpac off one Web Server as you currently have running off one Citrix Server today? In some regards Accpac 6 has an advantage over Citrix, in that the User Interface programs do not run on the server. On Citrix all the VB User Interface programs run on the Citrix server and consume a lot of memory (the VB runtime isn’t lightweight). In Accpac 6 we only run the Business Logic (Views) on the server, and the UIs run as JavaScript programs in the client’s Browser. So we should be able to handle more users than Citrix, but we have to set the acceptance criteria somewhere. Performance is one of those things you can never have enough of; it tends to be a journey you keep working on to extend the reach of your product.

Web based applications can be quite complex and good tools are required to simulate heavy user loads, automate functional testing and diagnose/troubleshoot problems once they are discovered. We have to ensure that Accpac will keep running reliably under heavy user loads as users are performing quite a diverse set of activities. With Accpac 6 there are quite a few new components like the dashboard data servlets that need to have good performance and not adversely affect people performing other tasks. We want to ensure that sales people entering sales orders inside CRM are as productive as possible.

We are fundamentally using SData for all our Web Service communications. This involves a lot of building and parsing of XML. How efficient is this? Fortunately all web based development systems are highly optimized for performing these functions. So far SData has been a general performance boost, and the overhead of the XML has been surprisingly light.
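
To give a flavour of what this traffic looks like, here is a minimal sketch in Python of fetching and parsing an SData feed. The feed URL and resource name are invented for illustration and are not a real Accpac endpoint; the point is just that SData rides on standard Atom XML, which every platform can parse efficiently.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical SData feed URL; real endpoints depend on the install.
    FEED_URL = "http://localhost/sdata/sageERP/accpac/-/salesOrders"
    ATOM = "{http://www.w3.org/2005/Atom}"

    with urllib.request.urlopen(FEED_URL) as response:
        feed = ET.parse(response).getroot()

    # SData payloads come back as standard Atom <entry> elements.
    for entry in feed.findall(ATOM + "entry"):
        print(entry.findtext(ATOM + "title"))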

Towards this goal we employ a number of open source or free testing tools. Some of these are used to repeatedly run automated tests to ensure our performance goals are being met; others are used to diagnose problems when they are discovered. Here is a quick list of some of the tools we are using and to what end.

Selenium (http://seleniumhq.org/) is a User Interface scripting/testing engine for performing automated testing of Web based applications. It drives the Browser as an end user would, to allow automated testing. We run a battery of tests using Selenium, and as well as looking for functionality breakages, we record the times all the tests take to run to watch for performance changes.
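
As a rough sketch of this kind of timed test, using Selenium’s Python bindings (the page URL and the five second budget here are made-up examples, not our actual tests):

    import time
    from selenium import webdriver

    # Hypothetical URL for a page under test; not a real Accpac address.
    PAGE_URL = "http://localhost/portal/orderEntry"

    driver = webdriver.Firefox()
    try:
        start = time.monotonic()
        driver.get(PAGE_URL)   # blocks until the page's load event fires
        elapsed = time.monotonic() - start
        print("Page loaded in %.2f seconds" % elapsed)
        assert elapsed < 5.0, "performance budget exceeded"
    finally:
        driver.quit()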

JMeter (http://jakarta.apache.org/jmeter/) is a load testing tool for Web Based applications. It basically simulates a large number of Browsers sending HTTP requests to a server. Since SData just uses HTTP, we can use JMeter to test our SData services. This is a very effective test tool for ensuring we run under heavy multiuser loads. You just type in the number of users you want JMeter to simulate and let it go.
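
Conceptually it is doing something like the following toy sketch in Python; JMeter gives you ramp-up control, assertions and reporting on top of this, and the URL below is a made-up example:

    import threading
    import time
    import urllib.request

    # Made-up SData URL; point it at whatever service is being tested.
    URL = "http://localhost/sdata/sageERP/accpac/-/customers"
    USERS = 50              # simulated concurrent users
    REQUESTS_PER_USER = 20

    times = []

    def simulated_user():
        for _ in range(REQUESTS_PER_USER):
            start = time.monotonic()
            with urllib.request.urlopen(URL) as response:
                response.read()
            times.append(time.monotonic() - start)  # list.append is thread-safe in CPython

    threads = [threading.Thread(target=simulated_user) for _ in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("%d requests, average %.3fs, worst %.3fs"
          % (len(times), sum(times) / len(times), max(times)))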

Fiddler2 (http://www.fiddler2.com/fiddler2/) is a Web Debugging Proxy that records all the HTTP traffic between your computer and the Internet. We can use this to record the number of network calls we make and the time each one takes. One cool thing Fiddler does is record your network performance and then estimate the time the same traffic would have taken for users in other locations or on other bandwidths. So we get the time for our network, but also estimates of the time someone using DSL in California or a modem in China would have taken to load our web page.
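
The arithmetic behind that kind of estimate is roughly round trips times latency, plus bytes over bandwidth. A back-of-the-envelope version in Python (the page size, request count and link speeds below are all invented numbers, just to show the shape of the calculation):

    # Invented example numbers, not Accpac measurements.
    PAGE_BYTES = 900_000        # total bytes downloaded for the page
    REQUESTS = 45               # HTTP requests needed to load it

    profiles = {                # (round-trip latency in s, bandwidth in bytes/s)
        "office LAN":      (0.001, 100_000_000 / 8),
        "DSL, California": (0.050, 1_500_000 / 8),
        "modem, overseas": (0.300, 56_000 / 8),
    }

    for name, (latency, bandwidth) in profiles.items():
        estimate = REQUESTS * latency + PAGE_BYTES / bandwidth
        print("%-16s ~%.1f seconds" % (name, estimate))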

dynaTrace AJAX Edition (http://ajax.dynatrace.com/pages/) is a great tool that monitors where all the time is going in the Browser. It tells you how much time was spent waiting for the network, executing JavaScript, processing CSS, parsing JavaScript and rendering HTML pages. It’s a good tool to tell you where to start diving in if things are slow.

Firebug (http://getfirebug.com/) is a great general purpose profiling, measuring and debugging tool for the Firefox browser. Although our main target for release is Internet Explorer, Firebug is such a useful tool that we often find ourselves drawn back to doing our testing in Firefox for the great add-ins like it that are available.

IE Developer Tools (http://msdn.microsoft.com/en-us/library/dd565628(VS.85).aspx) are built into IE 8 and are a very useful collection of tools for analyzing things inside IE. The JavaScript profiler is quite useful for seeing why things are slow in IE. Often things that work fine in Firefox or Chrome are very slow in IE, so you need an IE based tool to diagnose the problem.

Using these tools we’ve been able to successfully stress the Accpac system and to find and fix many bugs that caused the system to crash, lock up or drastically slow down. Our automation group runs performance tests on our nightly builds and posts the results to an internal web site for all our developers to track. As we develop out all the accounting applications in the new SWT (Sage Web Toolkit), we will aggressively be developing new automated tests to ensure performance is acceptable, and will look to keep expanding the performance of Accpac.


Written by smist08

February 26, 2010 at 9:33 pm

SQL Server, Performance and Clustered Indexes


One of our engineers, Ben Lu, was doing some analysis of why SQL Server was taking a long time to find some dashboard type totals for statistical display. There was an index on the data, and there weren’t that many records that met the search criteria that needed adding up. It turned out that even though it was following an index to add these up, the records were fairly evenly distributed through the whole table. This meant that to read the 1% or so of records that were required for the sum, SQL Server was actually having to read pretty much every data page in the table, usually to get just one record. This problem is a bit unique to getting total statistics or sums; if the data was being fed to a report writer, the index would return enough records quickly to start the report going, and then SQL Server can feed the records faster than a printer or report preview window can handle. However in this case nothing displays until all the records are processed.
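
A toy simulation shows why the scattered rows hurt so much. The table size and rows-per-page figures below are invented, but the effect has the same shape as what we saw:

    import random

    # Toy model: 1,000,000 rows stored 100 to a data page, with 1% of the
    # rows matching the query. All numbers are illustrative.
    ROWS, ROWS_PER_PAGE = 1_000_000, 100
    matching = ROWS // 100

    # Matching rows spread evenly through the table (our situation).
    scattered = {row // ROWS_PER_PAGE
                 for row in random.sample(range(ROWS), matching)}

    # The same number of rows packed together by the clustered key.
    packed = {row // ROWS_PER_PAGE for row in range(matching)}

    print("pages read, scattered:", len(scattered))  # roughly 6,000 of 10,000 pages
    print("pages read, packed:   ", len(packed))     # exactly 100 pages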

It turns out that SQL Server physically stores records on disk in the order of whichever index is specified as the clustered index (and only one is allowed per table). In Accpac we designate the primary index as the clustered index, as we found long ago that this speeds up most processing (since most processing is based off the primary index). However in this case the primary index was customer number, document number; and the query we were interested in processes all unpaid documents, which are spread fairly evenly over the customers (at least in the customer database we were testing with).
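
For illustration, here is roughly how such a clustered primary key is declared, sketched in Python with pyodbc. The connection string, table name and column names are all invented for this example and are not the real Accpac schema:

    import pyodbc

    # Assumed connection details; adjust for your own server and database.
    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
                          "DATABASE=sampledat;Trusted_Connection=yes")
    cursor = conn.cursor()

    # Hypothetical A/R document table. Declaring the primary key CLUSTERED
    # makes SQL Server store the rows on disk in (customer, document) order;
    # a table can have only one clustered index.
    cursor.execute("""
        CREATE TABLE ARDOCS (
            CUSTNO   VARCHAR(12)   NOT NULL,
            DOCNO    VARCHAR(22)   NOT NULL,
            PAIDFLAG SMALLINT      NOT NULL,
            AMOUNT   DECIMAL(19,3) NOT NULL,
            CONSTRAINT PK_ARDOCS PRIMARY KEY CLUSTERED (CUSTNO, DOCNO)
        )""")
    conn.commit()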

We found that we could speed up this calculation quite a bit by adding an additional segment to the beginning of the primary key. For instance, without any changes the query was taking 24 seconds on a large customer database. Putting the document date as the first segment reduced the query to 17 seconds (since hopefully unpaid invoices are near the end). However it turned out there were quite a few old unpaid invoices. Adding an “archive” segment to the beginning, where we mark older paid invoices as archived, reduced the query time to 4 seconds. We could also add the unpaid flag as the first segment to get the same speed up. But we are thinking that perhaps the concept of virtual archiving would have applications beyond this one query.
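
Continuing the same hypothetical schema, the archive segment variant looks something like this; again a sketch with invented names, not the actual change we made:

    import pyodbc

    # Same assumed connection details as the earlier sketch.
    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
                          "DATABASE=sampledat;Trusted_Connection=yes")
    cursor = conn.cursor()

    # Leading the clustered key with a hypothetical ARCHIVED segment packs
    # the live (non-archived) documents together at the front of the table.
    cursor.execute("""
        CREATE TABLE ARDOCS2 (
            ARCHIVED SMALLINT      NOT NULL,
            CUSTNO   VARCHAR(12)   NOT NULL,
            DOCNO    VARCHAR(22)   NOT NULL,
            PAIDFLAG SMALLINT      NOT NULL,
            AMOUNT   DECIMAL(19,3) NOT NULL,
            CONSTRAINT PK_ARDOCS2 PRIMARY KEY CLUSTERED (ARCHIVED, CUSTNO, DOCNO)
        )""")

    # The dashboard sum now reads only the contiguous non-archived pages.
    cursor.execute("SELECT SUM(AMOUNT) FROM ARDOCS2 "
                   "WHERE ARCHIVED = 0 AND PAIDFLAG = 0")
    print(cursor.fetchone()[0])
    conn.commit()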

Anyway, now that we understand what is going on, we can decide how best to address the problem: whether to add the paid flag to the key, add the date or add an archive flag. It’s interesting to see how SQL Server’s performance characteristics actually affect the schema design of your database if you want to get the best performance, and that often these choices go beyond just adding another index.

Written by smist08

June 7, 2009 at 9:05 pm