Stephen Smith's Blog

Musings on Machine Learning…

Posts Tagged ‘performance’

Performance and the Sage 300 Views Part 2

with 2 comments

Introduction

Last week we discussed avoiding table scans when using the Sage 300 ERP APIs. This week we are going to look at some other issues to do with updating data and with processing meta-data.

Last week I showed a cheetah (the fastest land animal) running as an example of performance and speed, but this week here she is resting and getting some attention.

[Image: cheetah resting]

AOM/UI Info/ViewDoc

First, if you are wondering where to find out which indexes a View supports, there are quite a few tools that provide this information. You can always look in SQL Server Management Studio, but then you won’t know which index it is by our numbering scheme. ViewDoc is a good tool that comes with the SDK and gives this information. UI Info comes with System Manager and lets you drill down to detailed View information. Then there is the Sage 300 Application Object Model (AOM) located here. Just note that to use the AOM, you must use Internet Explorer for some obscure reason.

Updating Data

Often if you are manipulating lots of records it’s in a header/detail situation. In this case all the database operations are done when you insert or update the header. The nice thing about this is that the Views know a lot about our database API and will do this in an optimal manner, so you don’t need to worry about it. Similarly, if you delete a header, the View will delete all attendant details for you in an efficient manner.

But suppose you want to update a bunch of records using our .Net API and want to know the most efficient way to do this. Say we want to add something to the end of every A/R Customer Name. The easy brute force way to do this would be:

arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}

This works but you might find it a bit slow. We can speed it up quite a bit by bracketing the whole thing in a database transaction:

mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}
mDBLinkCmpRW.TransactionCommit();

The times from the sample program (the same one as last week but with a bit added) are:

Time to update all customers: 00:00:00.087
Time to update all customers in a transaction: 00:00:00.038

So putting things in a database transaction helped. This is for Sample Data so there are only a few customers. The updated sample program is the PerformanceSamples project located here (both folder and zip file).

Database Transactions

Generally when using our API you don’t need to worry about database transactions, but occasionally, as in the above example, they are necessary. In the above example the first method has the side effect that each update is done in a separate transaction. That means you have the overhead of starting and committing a transaction with every record update. In the second example we start a transaction so all the records are committed as a single transaction. Strictly speaking, the two examples don’t do the same thing: if the first example throws an exception part way through, then all the updates done up to that point will be in the database, whereas in the second example they will be discarded since the transaction will be rolled back. This difference can be quite important if there are database integrity issues to consider. Generally Sage 300 ERP uses transactions to go from one state where the database has full integrity to another. This way we can rely on database transactioning to always maintain full database integrity.

There is overhead to setting up and committing a transaction, but there are also resources used for every operation done inside a transaction. At some point the above example will start to slow down if you have too many A/R customers. Generally you might want to commit the transaction every thousand customers or so for optimal performance (but make sure you maintain database integrity along the way).
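
For illustration, here is a sketch of the same update loop committing every thousand records. It reuses the arCUS view and mDBLinkCmpRW database link from the examples above, and the batch size of 1000 is just a starting point to tune for your data.

int count = 0;
mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
    // Every 1000 records, commit the work so far and start a new transaction,
    // so locks and transaction resources don't build up indefinitely.
    if (++count % 1000 == 0)
    {
        mDBLinkCmpRW.TransactionCommit();
        mDBLinkCmpRW.TransactionBegin();
    }
}
mDBLinkCmpRW.TransactionCommit();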

Also keep in mind that records updated in a transaction are locked from the point of update through to the end of the transaction, so updating a lot of records in a transaction will lock a lot of records and cause anyone else trying to read those records to wait until your transaction completes. So try to keep transactions short. Definitely don’t do any UI type operations in the middle of a transaction (like asking the user a question).

Revisioned Views

Revision List type views will store all inserts/updates/deletes in memory until you call Post. Generally these are detail views and you don’t see this functionality because it’s handled by the header. But occasionally you may need to deal with one of these directly (like perhaps GLAFS). In this case, since each Post is a transaction, you just need to be aware of how often you call it, as this has the same effect on performance as mentioned above.
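
Here is a rough sketch of how that looks; glAFS is just a placeholder variable for a view opened on a revision list view such as GLAFS, and the batch size is again something to tune.

glAFS.RecordClear();
int ops = 0;
while (glAFS.Fetch(false))
{
    // ... set whatever fields need changing on the current record ...
    glAFS.Update();        // queued in memory until Post is called
    if (++ops % 1000 == 0)
    {
        glAFS.Post();      // each Post is its own transaction
    }
}
glAFS.Post();              // flush any remaining queued operations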

Deleting

Although you can delete records as above by just replacing the Update with a Delete call, there is a better way. The Views have a FilterDelete method where you pass in a browse filter and all the records that match will be deleted. This will prove to be quite a bit faster than the loop above.
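
For example, something like the following sketch deletes every matching record in one call. The single filter-string argument shown for FilterDelete and the SWACTV (customer active switch) field are assumptions for illustration, so check the View documentation for the exact signature.

// Sketch: delete all inactive customers in one call instead of a Fetch/Delete loop.
// The FilterDelete signature and the SWACTV field name are assumptions.
arCUS.FilterDelete("SWACTV = 0");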

Meta-Data

If you run RVSpy with all the View calls selected you will see a lot of meta-data calls, getting information on fields and such. Generally meta-data calls are quite fast and don’t involve going to the database. However if you really go crazy you can slow things down quite a bit. If you make everything dynamic then you could end up making lots of meta-data calls, and cumulatively these slow you down a bit. Similarly, using constants for things like getting fields is slightly faster than passing field names, because you avoid a dictionary lookup (admittedly quite fast, but not as fast as direct access). Mostly people exercise good judgement and don’t go too wild driving everything from meta-data, but we have seen some crazy cases.
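
One easy way to cut down on repeated meta-data lookups, without hard-coding field constants, is to look the field up once outside the loop. This sketch reworks the earlier update loop that way, assuming (as is the usual behavior) that the field object stays bound to the view’s current record.

// Sketch: resolve the field once rather than doing a FieldByName
// dictionary lookup on every iteration of the loop.
var nameField = arCUS.Fields.FieldByName("NAMECUST");
arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    nameField.SetValue(nameField.Value + "A", false);
    arCUS.Update();
}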

Summary

Just a quick overview of some performance tips. Hopefully these all help to make your use of the Sage 300 API more efficient.


Performance and the Sage 300 Views Part 1

with 5 comments

Introduction

The Sage 300 ERP Views (Business Logic) give you a great deal of power to perform Accounting operations through our various APIs. However as in any programming, performance must always be taken into account. The Sage 300 ERP Views have a lot of features to help you perform operations with good performance, but like anything if they are used incorrectly, performance can be miserable.

This article is going to talk about various features and options that you can take advantage of to improve your application’s performance. As I am writing the article, it’s getting quite long, so I think I’m going to break it into two parts.

[Image: cheetah running]

Measure and Test

One of the big mistakes people make when performance tuning is to just make assumptions and changes without doing real measurements. If you have your code in a source control system, first establish a baseline for how long something takes, then make your changes and re-measure the time. Only check in your changes if the time is faster; if it isn’t, then you are just churning your code and potentially adding bugs. Performance is subtle, and often the best ideas and intentions just make a process slower.
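
For example, a small helper like this (a sketch, not part of the sample program) makes it easy to capture a baseline and re-measure after each change; the UpdateAllCustomers call in the usage comment is just a hypothetical stand-in for the code being tuned.

// Sketch: time a block of work with System.Diagnostics.Stopwatch so that
// before/after comparisons are based on real measurements.
using System;
using System.Diagnostics;

static class PerfTimer
{
    public static void TimeIt(string label, Action work)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        // Prints in the same hh:mm:ss.fff style as the timings quoted in these posts.
        Console.WriteLine("{0}: {1:hh\\:mm\\:ss\\.fff}", label, sw.Elapsed);
    }
}

// Usage (UpdateAllCustomers is hypothetical):
// PerfTimer.TimeIt("Time to update all customers", () => UpdateAllCustomers());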

Multi-User versus Single-User Performance

This article is about optimizing processes for single users. Often if you want to optimize for better multi-user throughput then it’s all about reducing locks and keeping resource usage down. Sometimes these goals align, i.e. one person doing something quicker translates to 100 people doing things quicker; sometimes they are opposed, i.e. one person can do something much quicker by taking over all available resources to the detriment of everyone else.

Read-Only versus Read-Write

You can open our database links and views either in read-write mode or read-only mode. Generally if you aren’t updating the data then you want to open in read-only mode, as this makes things quite a bit faster. If you might update the data then we have to use more expensive SQL operations so that if you do update the data, the update is fast and multi-user considerations are handled. If you open a table or link read-only then we use much lighter-weight SQL operations and the data is returned much more quickly. Finders use this to display their data faster.
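
As a sketch of what that looks like in the .NET API, the difference is just the flags you pass when opening the database link. The session variable, the DBLinkType/DBLinkFlags enum names and the AR0024 (A/R Customers) view ID reflect my recollection of the API, so verify them against the SDK documentation.

// Sketch: open a read-write link for updates and a read-only link for queries.
var dbLinkRW = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
var dbLinkRO = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadOnly);

var arCusRW = dbLinkRW.OpenView("AR0024");  // can update customer records
var arCusRO = dbLinkRO.OpenView("AR0024");  // lighter-weight, read-only access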

FilterSelect/FilterFetch versus Browse/Fetch

When you Browse/Fetch you can always update or delete the record fetched. As mentioned above, that can introduce extra overhead and slow things down. Making the table or link read-only will help Browse/Fetch, but perhaps a better method is to use the FilterSelect/FilterFetch methods, which are better optimized for SQL Server than Browse/Fetch. The results from these can’t be updated or deleted, but at the same time the access method is always lightweight whether the link is open read-only or read-write.
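
A sketch of the difference is below. The exact FilterSelect parameter list differs between API flavors, so the arguments shown beyond the filter string (ascending, order, filter origin) are assumptions to verify against the SDK, as is the IDGRP field used in the filter.

// Browse/Fetch: records can be updated or deleted, so the access path is heavier.
arCUS.Browse("IDGRP = \"WHOLESALE\"", true);
while (arCUS.Fetch(false))
{
    // read or update the current record
}

// FilterSelect/FilterFetch: read-only results over a lighter-weight access path.
// The extra arguments here are assumptions; check the SDK for the exact signature.
arCUS.FilterSelect("IDGRP = \"WHOLESALE\"", true, 0, ViewFilterOrigin.FromStart);
while (arCUS.FilterFetch(false))
{
    // read the current record (Update/Delete are not available here)
}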

Indexes

Sage 300 will always use an index to read data. We have a lot of code to optimize access based on available indexes. If you use the indexes provided your code will be much faster.

For example, suppose you want to know if there are any open G/L Batches. A quick bit of code to do this is:

glBCTL.Browse("BATCHSTAT=1", true);
bool isOpenBatch = glBCTL.GoTop();

This works pretty well on sample data, but then you go to a client site and suddenly it becomes quite slow. The reason is that since BATCHSTAT isn’t part of the primary index, the GoTop basically goes looking through the Batch table until it reaches the end or finds an open batch. Since open batches are usually at the end, this tends to be sub-optimal. Practically you could speed this up by searching through the table backwards, since then you would probably find one quicker, but if there are no open batches you still search the whole table. Fortunately there is a better way. The GLBCTL table has two indexes: one is its primary default index of BATCHID, and the other secondary index is on BATCHSTAT and BATCHID (to make it an index without duplicates). So it makes sense to use this index:

glBCTL.Order = 1;
glBCTL.Browse("BATCHSTAT=1", true);
isOpenBatch = glBCTL.GoTop();

Simply adding the Order property makes this search much quicker. I included a sample program with timers and the full code. The results on sample data show the speed difference (not that it was all that slow to start with):

Time to determine if there are open batches: 00:00:00.034
Time to determine if there are open batches take 2: 00:00:00.007

The sample program is located here. It’s the PerformanceSamples one (folder and zip).

So generally you want to use an index that matches the fields you are searching on as closely as possible. Usually having clauses in your browse filter that use the index segments from left to right will result in the fastest queries.

This example may look a little artificial, but once you get into the operational modules like O/E and P/O this becomes crucial. That is because the main tables like the Order Header have a uniquifier as the primary index. When you want to look something up it’s usually by something like order number, and to do this efficiently you have to use an alternate index. So once you are using these modules you will be using alternate indexes a lot. In these modules also be careful that quite a few alternate indexes allow duplicates, so you might get back quite a few records unexpectedly.

RVSpy/DBSpy

RVSpy and DBSpy are good tools for identifying bad behavior. The logs contain time information so you can see where the time is being used, but more often than not doing something bad for performance results in a series of operations appearing over and over in these logs. Usually scrolling to the middle of the output file is a good way to see something going awry. You can also use SQLTrace or ODBCTrace, but I find these slightly less useful.

When using RVSpy for this purpose, it helps to turn off logging to a Window (slow) and only log to a file (make sure you specify one). Further choose the View calls you want to log, usually disabling anything to do with meta-data and anything that is field level.

So if you see output like:

[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.58;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.60].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}

If output like this goes on for pages and pages, then you have something wrong.

Avoid Table Scans

Most of this article is about avoiding table scans, but just to re-iterate: table scans are bad. People are often fooled by testing on sample data. Many of the tables in sample data are quite small and it doesn’t really matter what you do. However, in the real world with real customer databases things will usually be quite different. For instance, sample data has 9 tax authorities, which you might think is reasonable. But in the USA, where any municipal government agency can charge a sales tax, there are over 35,000 tax authorities. If you read all of these (say, to populate a combo-box to pick one from), then you will run very slowly and your customers will be unhappy.

Summary

Sage 300 ERP has many mechanisms to access and manipulate data efficiently. But as with anything in programming, if you use APIs without due care and attention then performance (and quality in general) will suffer.

Written by smist08

March 10, 2015 at 9:44 pm

Preparing for the Sage ERP Accpac 6.0A Launch

with 4 comments

Here in R&D we are working hard to release Sage ERP Accpac 6.0A. This is a really major release for us. It introduces our next generation of Web Based UIs. Whenever the major number in Accpac’s version changes it signals the introduction of a major new technology (1 – Windows, 2 – SQL Server, 3 – Crystal Reports, 4 – 32 Bit, 5 – VB ActiveX UIs). Version 6 is no different, introducing our Sage Web Toolkit (SWT) based zero-client web based (HTML/JavaScript) UIs and the introduction of SData REST based Web Services. I’ve already written quite a few articles on the new features and technologies in Version 6:

This posting is to look at what we are doing now that we are code complete. Many of you can see some of the activities as you participate in the Alpha, Beta or Controlled Release programs, but behind the scenes there are many things going on to prepare for release.

Code Complete

We reached code complete back on August 20, 2010. This was after completing 20 3-week agile scrum sprints (http://en.wikipedia.org/wiki/Scrum_(development)). Prior to the start of the scrum sprints, quite a bit of work had been done working out the framework and doing the underlying research. But these 20 sprints were the bulk of the work. This was our first project where we used agile scrum as our development methodology. As a result we had a few problems along the way, but we learned a lot and are all in agreement that the agile method is far superior to the old waterfall methodology we were using previously. The goal of agile is that each sprint is self-contained, in that any feature (or story) is completed entirely within one sprint including development, QA and documentation. Ideally the product is in a releasable state at the end of each sprint.

Regression

Once we reached code complete, we started a regression test phase. This is intended to be a thorough regression test of the entire product to ensure nothing is broken. Ideally if you are perfect at using the agile methodology, you shouldn’t need to do this. You just ship when you are code complete. But we aren’t perfect and we are still learning how to do agile better. In fact agile is a continuous improvement (or kaizen http://en.wikipedia.org/wiki/Kaizen) type of process that never sits still. You are always striving to improve. However Accpac is a large and complicated product with many interactions between all its parts. So we really do need to run a full regression to find and remove any funny bugs that were introduced unawares during the sprint process. We continue to do this testing until we release.

Security Audit

Version 6 will still be an on-premise application. For this version, Sage won’t be hosting it as a SaaS product. However we suspect that some customers will expose the Accpac Web Portal and other screens to the Internet at large. They won’t keep it safely hidden behind multiple layers of corporate firewall protection. Once Accpac is exposed to the Internet, security becomes a much larger concern. Application security is more than just switching to using HTTPS instead of HTTP. To this end we embarked on educating our programmers and QA on Internet security threats and added security related practices to our coding standards. Doing this is often called adding an SDL (Security Development Lifecycle http://www.microsoft.com/security/sdl/default.aspx). As part of this we had an external security firm do a complete security audit on all our new web based technologies. The external firm did find a number of important security vulnerabilities that were all fixed, and we learnt a lot about how to avoid these problems in the future. This way we received external confirmation that we were following good security practices and that Accpac would pose a challenge to hackers trying to break in. This testing started just before code complete and finished during regression.

Performance Testing

With any Web application, performance is always an important priority. Over the past couple of releases we’ve been putting more and more performance testing into place to ensure the Accpac application performs well in all areas. This includes the performance of a single user doing things: how long it takes for screens to appear, how long processing tasks take, etc. Then we have multi-user testing, where we want to ensure that single-user performance doesn’t degrade badly once many users are in the system doing the same thing. Over the web there are a couple of extra complexities, such as how good your network connection is. We want Accpac to work well over any Internet broadband connection, not just on a fast corporate LAN. Some of the testing we are doing is:

  • Using Selenium (http://seleniumhq.org/) to run a wide variety of user tests and record the performance on a weekly basis, comparing against benchmarks set by Product Management. These are single-user tests to make sure performance is good for a single user. We also use Selenium for automated functional testing (a small illustrative sketch follows this list).
  • Using JMeter (http://jakarta.apache.org/jmeter/) to run a wide variety of tests under multi-user load. JMeter doesn’t test what runs in the browser; it simulates what the browser does at the network level to test the server under heavy load. Again these tests are run weekly and tracked to watch the trends.
  • Performing weekly manual multi-user testing. Every week we have all of QA test against the same server and run “scripts” of typical tasks that real customers would do. This gives us confidence that we work well in a real customer environment and validates that our automated tests are testing the right things.
  • Using Fiddler (http://www.fiddler2.com/fiddler2/) to record all HTTP requests made by Accpac and to ensure that these are kept to a bare minimum and that each one is responded to quickly. This is to ensure that we will work well over slower Internet connections with higher latency.
  • Continuing to use VBA macros to ensure any changes haven’t had a detrimental effect on the business logic performance.
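
To give a flavor of the single-user timing tests mentioned in the first bullet, here is an illustrative sketch (not our actual test code) using the Selenium WebDriver C# bindings; the URL, element id and benchmark value are placeholders.

// Illustrative sketch: measure how long a screen takes to load and compare
// it to a benchmark. All names marked as placeholders are assumptions.
using System;
using System.Diagnostics;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class PortalLoadTimeTest
{
    static void Main()
    {
        TimeSpan benchmark = TimeSpan.FromSeconds(5);              // placeholder benchmark
        using (IWebDriver driver = new ChromeDriver())
        {
            var sw = Stopwatch.StartNew();
            driver.Navigate().GoToUrl("http://localhost/portal");  // placeholder URL
            // Wait until a known element on the page appears before stopping the clock.
            new WebDriverWait(driver, TimeSpan.FromSeconds(30))
                .Until(d => d.FindElements(By.Id("portalHome")).Count > 0);  // placeholder id
            sw.Stop();
            Console.WriteLine("Portal load time: {0} (benchmark {1})", sw.Elapsed, benchmark);
        }
    }
}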

Quality Debt Reduction

With every release we want to improve the quality of Accpac. For this release we cataloged all known defects at the start of the release, and a release certification requirement is that the total number of defects must be lower at release. So from release to release the number of open defects is reducing. There is also a lot of work going on to prioritize these defects so that the ones that affect customers the most (based on severity and frequency) are fixed first. We want to ensure this isn’t a numbers game where we just fix lots of issues about underlining or field widths in reports. The real aim is to have real, measurable metrics, published in a dashboard visible to everyone in Sage up to the CEO, to ensure we are improving quality in a real and measurable way.

Alpha

To help our ISVs (http://en.wikipedia.org/wiki/Independent_software_vendor) come up to speed quickly, we’ve released “alpha” versions of the software for them to work with. We release these versions well ahead of code complete so ISVs have plenty of time to learn the new Web UI programming model and incorporate it into their products. We’ve been holding training classes on this; this year at South African Insights (Feb), North American Insights (May) and TPAC (August). We will be holding this training again in Australia and Asia. Having ISVs on board and developing using the new Web based technology is crucial to its success.

Beta

We have now had two Beta releases. These are to start getting business partners familiar with the new features, to test them, and to provide feedback. Beta started just before code complete and allows some room to incorporate feedback (which we have) as well as providing enough time to fix bugs. This gives BPs a chance to run test conversions of client data to ensure it can be upgraded to the new version, plus to determine any work that will need to be done on customizations. Since ERP applications tend to be mission critical, we don’t want the beta installed in live environments, just test environments.

If any BPs can send us their client’s data, this is a great help for our real world testing. For the client and BP this ensures that we have used their data as a test case and they will know it will upgrade without problems. Of course the customer has to be OK with sharing their possibly very sensitive data. But if they do, we greatly appreciate it. Sage has a data retention and protection policy that spells out the steps we take to safeguard the data and how long we will keep it.

Controlled Release

Next is controlled release. This is when we take a release candidate of the software and install it in live environments. Due to the mission criticality of these environments, we ensure we have Support and Development on call to quickly address any issues. This is the final test to ensure everything works fine in a real customer environment. We will then help to move the customer to the final version (if necessary). The controlled release program is very important to us and we put a lot of weight in how it goes when deciding whether to release.

Release Certification

Now that all the testing is completed and all the alpha, beta and controlled release programs are finished, we need to decide if we should release. We prepare a “release certification” report of the results and statistics from all the testing and all the feedback, bring all the stakeholders together, and have a meeting to discuss the release. Beta and controlled release feedback is very prominent here. At this meeting we approve, reschedule or reject the release. This is more than just development: this meeting also ensures that all training material is in place, documentation is complete, language translations are done, marketing material is ready, the Sage Web site is ready, etc. Once we have the go ahead from this meeting, the release will proceed and you will be able to download the product.

Summary

This was a quick summary of some of the tasks that are being performed prior to the release of Sage ERP Accpac 6.0A. This is a huge project that involves many people, all of whom are working hard to make this a successful release and to launch the next generation of Accpac technology.

Written by smist08

September 25, 2010 at 4:08 pm