Stephen Smith's Blog

All things Sage ERP…


Performance and the Sage 300 Views Part 2



Last week we discussed avoiding table scans when using the Sage 300 ERP APIs. This week we are going to look at some other issues to do with updating data and with processing meta-data.

Last week I showed a cheetah running as an example of performance and speed (the fastest land animal), but this week here she is resting and getting some attention.


AOM/UI Info/ViewDoc

First, if you are wondering where to find out which indexes a View supports, there are quite a few tools that will tell you. You can always look in SQL Server Management Studio, but then you won’t know which index number it is in our numbering scheme. ViewDoc is a good tool that comes with the SDK that gives this information. UI Info comes with System Manager and can drill down through the UI Info to get detailed View Info. Then there is the Sage 300 Application Object Model (AOM) located here. Just note that to use the AOM, you must use Internet Explorer for some obscure reason.

Updating Data

Often if you are manipulating lots of records it’s in a header/detail situation. In this case all the database operations are done when you insert or update the header. The nice thing about this is that the Views know a lot about our database API and will do this in an optimal manner, so you don’t need to worry about it. Similarly if you delete a header, the View will delete all attendant details for you in an efficient manner.

But suppose you want to update a bunch of records using our .Net API and want to know the most efficient way to do this. Say we want to add something to the end of every A/R Customer Name. Our easy brute force way to do this would be:

while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}


This works but you might find it a bit slow. We can speed it up quite a bit by bracketing the whole thing in a database transaction:

mDBLinkCmpRW.TransactionBegin();   // mDBLinkCmpRW stands for the read-write DBLink the view was opened from
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}
mDBLinkCmpRW.TransactionCommit();


The times from the sample program (the same one as last week but with a bit added) are:

Time to update all customers: 00:00:00.087
Time to update all customers in a transaction: 00:00:00.038

So putting things in a database transaction helped. This is for Sample Data so there are only a few customers. The updated sample program is the PerformanceSamples project located here (both folder and zip file).

Database Transactions

Generally when using our API you don’t need to worry about database transactions, but occasionally, like in the above example, they are necessary. In the above example the first method has the side effect that each update is done in a separate transaction. That means you have the overhead of starting and committing a transaction with every record update. In the second example we start a transaction so all the records are committed as a single transaction. Strictly speaking the two examples don’t do the same thing: if the first example throws an exception part way through, then all the updates done up to that point will be in the database, whereas in the second example they will be discarded since the transaction will be rolled back. This difference can be quite important if there are database integrity issues to consider. Generally Sage 300 ERP uses transactions to go from one state where the database has full integrity to another. This way we can rely on database transactioning to always maintain full database integrity.

There is overhead to setting up and committing a transaction, but there are also resources used for every operation done inside a transaction. At some point the above example will start to slow down if you have too many A/R customers. Generally you might want to commit the transaction every thousand customers or so for optimal performance (but make sure you maintain database integrity along the way).
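Here is a rough sketch of that pattern, continuing the customer-update example above. The thousand-record batch size is just the rule of thumb mentioned; pick a commit point that leaves the database in a consistent state:

int count = 0;
mDBLinkCmpRW.TransactionBegin();
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
    if (++count % 1000 == 0)
    {
        // Commit only at a point where the database still has full integrity.
        mDBLinkCmpRW.TransactionCommit();
        mDBLinkCmpRW.TransactionBegin();
    }
}
mDBLinkCmpRW.TransactionCommit();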

Also keep in mind that records updated in a transaction are locked from the point of update through to the end of the transaction, so updating a lot of records in one transaction will lock a lot of records and force anyone else who needs to read those records to wait until your transaction completes. So try to keep transactions quick. Definitely don’t do any UI type operations in the middle of a transaction (like asking the user a question).

Revisioned Views

Revision List type views will store all insert/updates/deletes in memory until you call Post. Generally these are detail views and you don’t see this functionality because it’s handled by the header. But occasionally you may need to deal with one of these (like perhaps GLAFS). In this case since each Post is a transaction, you just need to be aware of how often you call it as this will have the same effect on performance as mentioned above.
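If you do find yourself writing directly to one of these, a rough sketch of batching the Posts might look like the following (the collection and helper names here are placeholders, not part of the API):

// detailView is a revision list type view; Inserts sit in memory until Post is called.
int pending = 0;
foreach (var rec in recordsToWrite)          // recordsToWrite is a placeholder collection
{
    SetDetailFields(detailView, rec);        // placeholder helper that fills in the fields
    detailView.Insert();
    if (++pending >= 1000)
    {
        detailView.Post();                   // each Post runs as its own database transaction
        pending = 0;
    }
}
if (pending > 0) detailView.Post();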


Although you can delete records as above just replacing the Update with a Delete call, there is a better way. The Views have a FilterDelete method where you pass in a browse filter and all the records that match will be deleted. This will prove to be quite a bit faster than the above.
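For example, a hedged sketch (I’m assuming FilterDelete takes the browse filter as its only argument, matching the description above; the filter itself is just illustrative):

// Delete every customer whose name starts with TEST in a single call,
// rather than fetching and deleting record by record.
arCUS.FilterDelete("NAMECUST LIKE \"TEST%\"");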


If you run RVSpy with all the View calls selected you will see a lot of meta-data calls, getting information on fields and such. Generally meta-data calls are quite fast and don’t involve going to the database. However if you really go crazy you can slow things down quite a bit. If you make everything dynamic then you could end up making lots of meta-data calls, and cumulatively these slow you down a bit. Similarly using constants in things like getting fields is slightly faster than passing field names, because you avoid a dictionary lookup (admittedly quite fast, but not as fast as direct access). Mostly people exercise good judgement and don’t go too wild driving everything from meta-data, but we have seen some crazy cases.


Just a quick overview of some performance tips. Hopefully these all help to make your use of the Sage 300 API more efficient.


Performance and the Sage 300 Views Part 1



The Sage 300 ERP Views (Business Logic) give you a great deal of power to perform Accounting operations through our various APIs. However as in any programming, performance must always be taken into account. The Sage 300 ERP Views have a lot of features to help you perform operations with good performance, but like anything if they are used incorrectly, performance can be miserable.

This article is going to talk about various features and options that you can take advantage of to improve your application’s performance. As I am writing the article, it’s getting quite long, so I think I’m going to break it into two parts.


Measure and Test

One of the big mistakes people make when performance tuning is to just make assumptions and changes without doing real measurements. If you have your code in a source control system, first establish a baseline for how long something takes, then make your changes and re-measure the time. Only check in your changes if the time is faster; if it isn’t, then you are just churning your code and potentially adding bugs. Performance is subtle and often the best ideas and intentions just make a process slower.

Multi-User versus Single-User Performance

This article is about optimizing processes for single users. Often if you want to optimize for better multi-user throughput then it’s all about reducing locks and keeping resource usage down. Sometimes these goals align, i.e. one person doing something quicker translates to 100 people doing things quicker; sometimes they are opposing, i.e. one person can do something way quicker if he takes over all available resources to the detriment of everyone else.

Read-Only versus Read-Write

You can open our database links and views either in read-write mode or read-only mode. Generally if you aren’t updating the data then you want to open in read-only mode as this makes things quite a bit faster. If you might update the data then we have to use more expensive SQL operations so that if you do update the data, the update is fast and multi-user considerations are handled. If you open a table or link read-only then we use much lighter weight SQL operations and the data is returned much quicker. Finders use this to display their data quicker.
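For example, here is a minimal sketch of opening a company link read-only with the .Net API (the sign-on values are the usual sample data placeholders; substitute your own):

using System;
using ACCPAC.Advantage;

Session session = new Session();
session.Init("", "XY", "XY1000", "62A");
session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);

// A read-only link uses lighter weight SQL operations, so reads come back quicker.
DBLink dbLinkRO = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadOnly);
View arCUS = dbLinkRO.OpenView("AR0024");   // AR0024 = A/R Customers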

FilterSelect/FilterFetch versus Browse/Fetch

When you Browse/Fetch you can always update or delete the record fetched. As mentioned above that can introduce extra overhead and slow things down. Making the table or link read-only will help Browse/Fetch, but perhaps a better method is to use the FilterSelect/FilterFetch methods which are better optimized for SQL Server than Browse/Fetch. The results from these can’t be updated or deleted but at the same time the access method is always light weight whether the link is open read-only or read-write.
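Here is a rough sketch of the FilterSelect/FilterFetch pattern on the arCUS view from the sketch above. Note that the extra FilterSelect parameters (sort order and origin) are written from memory, so treat them as assumptions and check the .Net API reference:

// Iterate customers in group WHL; the fetched records cannot be updated or deleted.
arCUS.FilterSelect("IDGRP = \"WHL\"", true, 0, ViewFilterOrigin.FromStart);
while (arCUS.FilterFetch(false))
{
    Console.WriteLine(arCUS.Fields.FieldByName("NAMECUST").Value);
}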


Sage 300 will always use an index to read data. We have a lot of code to optimize access based on available indexes. If you use the indexes provided your code will be much faster.

For example, suppose you want to know if there are any open G/L Batches. A quick bit of code to do this is:

glBCTL.Browse("BATCHSTAT=1", true);
bool isOpenBatch = glBCTL.GoTop();

This works pretty well on sample data, but then you go to a client site and suddenly it becomes quite slow. The reason is that since BATCHSTAT isn’t part of the primary index, the GoTop basically goes looking through the Batch table until it reaches the end or finds an open batch. Since open batches are usually at the end, this tends to be sub-optimal. Practically you could speed this up by searching through the table backwards, since then you would probably find one quicker, but if there are no open batches you still search the whole table. Fortunately there is a better way. The GLBCTL table has two indexes: one is its primary default index of BATCHID, and the other is a secondary index on BATCHSTAT and BATCHID (to make it an index without duplicates). So it makes sense to use this index:

glBCTL.Order = 1;
glBCTL.Browse("BATCHSTAT=1", true);
isOpenBatch = glBCTL.GoTop();

Simply adding the Order property makes this search much quicker. I included a sample program with timers and the full code. The results on sample data show the speed difference (not that it was all that slow to start with):

Time to determine if there are open batches: 00:00:00.034
Time to determine if there are open batches take 2: 00:00:00.007

The sample program is located here. It’s the PerformanceSamples one (folder and zip).

So generally you want to use an index that matches the fields that you are searching on as much as possible. Usually having clauses in your browse filter that uses the index segments from left to right will result in the fastest queries.

This example may look a little artificial, but once you get into the operational modules like O/E and P/O this becomes crucial. That is because the main tables, like the Order Header, have a uniquifier as the primary index. When you want to look something up it’s usually by something like order number, and to do this efficiently you have to use an alternate index. So once you are using these modules you will be using alternate indexes a lot. In these modules also be careful that quite a few alternate indexes allow duplicates, so you might get back quite a few records unexpectedly.
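For example, here is a hedged sketch of looking up an order by order number through an alternate index. The view ID and index number are from memory; verify them with ViewDoc or the AOM, and dbLink is whatever DBLink you have open:

View oeORDH = dbLink.OpenView("OE0520");   // O/E Order Headers (assumed view ID)
oeORDH.Order = 1;                          // assumed: the alternate index containing ORDNUMBER
oeORDH.Browse("ORDNUMBER = \"ORD000000000076\"", true);
bool found = oeORDH.GoTop();               // positions on the order without scanning the table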


RVSpy and DBSpy are good tools for identifying bad behavior. The logs contain time information so you can see where the time is being used, but more often than not doing something bad for performance results in a series of operations appearing over and over in these logs. Usually scrolling to the middle of the output file is a good way to see something going awry. You can also use SQLTrace or ODBCTrace, but I find these slightly less useful.

When using RVSpy for this purpose, it helps to turn off logging to a Window (slow) and only log to a file (make sure you specify one). Further choose the View calls you want to log, usually disabling anything to do with meta-data and anything that is field level.

So if you see output like:

[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.58;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.60].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}

If this goes on for pages and pages, then you have something wrong.

Avoid Table Scans

Most of this article is about avoiding table scans, but just to re-iterate table scans are bad. People are often fooled by testing on sample data. Many of the tables in sample data are quite small and it doesn’t really matter what you do. However in the real world with real customer databases things will usually be quite different. For instance sample data has 9 tax authorities, which you might think is reasonable. But in the USA where any municipal government agency can charge a sales tax, there are over 35,000 tax authorities. If you read all these (like to populate a combo-box to pick one from), then you will run very slowly and your customers will be unhappy.


Sage 300 ERP has many mechanisms to access and manipulate data efficiently. But as with anything in programming, if you use APIs without due care and attention then performance (and quality in general) will suffer.

Written by smist08

March 10, 2015 at 9:44 pm

Drilldown in Sage 300 ERP



Much accounting detail is entered in one application and passed on to another for recording. Drilldown is the ability to reverse the audit trail and display, application by application, the document back to its original entry into the Sage 300 ERP system. For example, in Sage 300 General Ledger (G/L), you can drilldown from General Ledger Transaction History to the Journal Entry, from the Journal Entry to the originating transaction in Accounts Receivable, and from the Invoice, Credit Note, or Debit Note, to the originating transaction in Order Entry.

The way this works is a bit cryptic in Sage 300 ERP’s database and this blog article will attempt to explain some of the internal workings so that developers and customizers who want to use this data for other purposes can hopefully figure out how to interpret it.

The documentation for the full drilldown infrastructure for third party developers is contained in Appendix L of the SDK’s Programming Guide.


Drilldown Database Fields

The drilldown fields in a document provide a link to the application that created the document. They are done in a generic way so any application (Sage or third party) can provide this information and have its screens drilled down to. As a result the fields are fairly generic and it’s up to the drilldown target to provide what it needs when it creates the document. There are three fields: the source application (our usual two character application id like AP), a drill down type (each application may have several document types, like invoices or receipts), and a generic link field, which is a large number into which the application packs whatever it needs to make the link.

For example you can drill down from G/L Journal Entry back to the application that created the Journal. In the GLJEH table there are three fields: DRILAPP, DRILSRCTY, DRILLDWNLK. Suppose P/O creates a Journal Entry, it might populate DRILAPP with “PO”, DRILSRCTY with 3 (for Receipt) then DRILLDWNLK with 1740 (where 1740 is a link to PORCPH1.RCPHSEQ).

This is rather cryptic since these fields are meant to be internal to the application that will be drilled down to. But suppose you want to use these fields for other purposes. Here I’ll give a few examples of how the Sage applications use them, which should help for many cases. Plus they will give an indication of how these are built so you can reverse engineer other cases.

Here are some of the types and links used by I/C, O/E and P/O. These are pretty straightforward due to the way data is indexed in these applications. The format below is document type: drill down type: link field.

I/C Adjustment: 3: ICADEH.DOCUNIQ

O/E Credit and Debit Note: 2: OECRDH.CRDUNIQ

P/O Credit Note: 6: POCRNH1.CRNHSEQ
P/O Debit Note: 7: POCRNH1.CRNHSEQ

A/R and A/P are a bit more difficult. Here they have to pack quite a bit of information into that field. A 10-byte BCD can hold up to 18 digits. Into this we want to pack the Posting Sequence Number, Batch Number and Entry Number. The way this works, the first digit is the size of the Posting Sequence Number, then the second digit is the size of the Batch Number. Then you have the Posting Sequence Number, then the Batch Number then the left over is the Entry Number. Since the first two digits are used for sizes, the sum of the lengths of the Posting Sequence Number, Batch Number and Entry Number must be less than or equal to 16.

For instance if the DRILLDWNLK is 222765000000000001 then the length of the Posting Sequence Number is 2 as is the length of the Batch Number. The Posting Sequence Number is 27, the Batch Number is 65 and the Entry is 1.
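As an illustration, here is a small sketch that unpacks such a link following the layout just described (this is my own helper, not a Sage-supplied routine):

// Unpack an A/R or A/P drill down link, e.g. "222765000000000001".
static void UnpackDrillDownLink(string link)
{
    int postingSeqLen = int.Parse(link.Substring(0, 1));   // first digit: length of Posting Sequence Number
    int batchLen      = int.Parse(link.Substring(1, 1));   // second digit: length of Batch Number
    long postingSeq   = long.Parse(link.Substring(2, postingSeqLen));
    long batchNumber  = long.Parse(link.Substring(2 + postingSeqLen, batchLen));
    long entryNumber  = long.Parse(link.Substring(2 + postingSeqLen + batchLen));
    Console.WriteLine("Posting Sequence={0}, Batch={1}, Entry={2}", postingSeq, batchNumber, entryNumber);
}

Called with the value above, it prints Posting Sequence=27, Batch=65, Entry=1.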

Drilldown View

Knowing the raw format is fine for some applications. But if you are operating in an environment with access to the Sage 300 Business Logic then you can call the application’s View to interpret this value for you and give it in the format of a UI to run and the parameters to pass it, to get the correct information displayed.

Here we will write a small .Net application that uses the Sage 300 API .Net API to process through the drill down information in the G/L Journal Header and process the A/P drill down information. You can find the project here, it is the Drilldown one.

Each application that supports drilldown has such a view. It is defined in its xx.ini file (in this case ap.ini) in the [setup] section there will be a DrillDownView=aannnn entry which specifies the drill down view (in this case AP0062). In the sample program, I just hard code the View and leave it as an exercise to the reader to generalize and load these from the .INI file.
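If you do want to generalize it, a rough sketch of reading that entry might look like this (the parsing is deliberately simplified, and the ap.ini path is whatever is correct for your installation):

// Scan an application's .INI file for the DrillDownView entry in its [setup] section.
static string GetDrillDownView(string iniPath)
{
    bool inSetup = false;
    foreach (string rawLine in System.IO.File.ReadAllLines(iniPath))
    {
        string line = rawLine.Trim();
        if (line.StartsWith("["))
            inSetup = line.Equals("[setup]", System.StringComparison.OrdinalIgnoreCase);
        else if (inSetup && line.StartsWith("DrillDownView=", System.StringComparison.OrdinalIgnoreCase))
            return line.Substring("DrillDownView=".Length).Trim();   // e.g. "AP0062"
    }
    return null;   // this application doesn't define a drill down view
}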

Basically you use this view by setting the drill down type and link and then calling Process(). This then populates the other fields. This gives you a status field of whether you can drill down on this, a roto id of a UI to run and the parameters to pass the UI. Note that UI parameters are separated by line breaks.
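For instance, splitting such a parameters string apart is just a matter of breaking it on the line-break characters:

string parameters = "MODE=1\nBATCH=55\nENTRY=1";   // example value as returned by the drill down view
foreach (string p in parameters.Split('\n'))
    Console.WriteLine(p);   // prints MODE=1, then BATCH=55, then ENTRY=1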

So in this case when we run the application we get lines specifying the drill down info followed by the drill down View’s interpretation of it. For instance:

Drill down info in GLJEH: AP 0 223055000000000001

UI Information to run for this: AP2100 MODE=1\nBATCH=55\nENTRY=1

Here is the main part of the code that processes this:

// Cycle through all of GLJEH and print out all the drill down information
while (true == glJEH.Fetch(false))
{
    string drillSrce, drillLnk, rotoid, parameters, drillkey, drillInfo;
    int drillType;

    drillSrce = glJEH.Fields.FieldByName("DRILAPP").Value.ToString();
    drillType = Convert.ToInt32(glJEH.Fields.FieldByName("DRILSRCTY").Value.ToString());
    drillLnk = glJEH.Fields.FieldByName("DRILLDWNLK").Value.ToString();
    Console.WriteLine("Drilldown: " + drillSrce + " " + drillType + " " + drillLnk);
    if (drillSrce.Equals("AP"))
    {
        // apDrill is the A/P drill down view (AP0062) opened earlier in the sample.
        apDrill.Fields.FieldByName("SRCETYPE").SetValue(drillType, false);
        apDrill.Fields.FieldByName("DRILLDWNLK").SetValue(drillLnk, false);
        apDrill.Process();   // interprets the link and populates the output fields
        drillInfo = apDrill.Fields.FieldByName("DRILLTYPE").Value.ToString();
        rotoid = apDrill.Fields.FieldByName("ROTOID").Value.ToString();
        parameters = apDrill.Fields.FieldByName("PARAMETERS").Value.ToString();
        drillkey = apDrill.Fields.FieldByName("DRILLKEY").Value.ToString();
        Console.WriteLine(drillInfo + " " + rotoid + " " + parameters + " " + drillkey);
    }
}


Drill down is a useful feature in Sage 300 ERP and hopefully this information helps people leverage the infrastructure for some new interesting customizations and integrations.

Written by smist08

February 27, 2015 at 8:48 am

On Calculating Dashboards



Most modern business applications have some sort of dashboard that displays a number of KPIs when you first sign in. For instance here are a couple of KPIs from Sage 300 ERP:


To be useful, these KPIs can involve quite sophisticated calculations to display relevant information. However users need their home page to start extremely quickly so they can get on with their work. This article describes various techniques to calculate and present this information quickly, starting with easy straightforward approaches and progressing to more sophisticated methods that utilize the power of the cloud.

Simple Approach

The simplest way to program such a KPI is to leverage any existing calculations (or business logic) in the application and use that to retrieve the data. In the case of Sage 300 ERP this involves using the business logic Views which we’ve discussed in quite a few blog posts.

This usually gives a quick way to get something working, but often doesn’t exactly match what is required or is a bit slow to display.

Optimized Approach

Last week, we looked a bit at using the Sage 300 ERP .Net API to do a general SQL query, which could be used to optimize calculating a KPI. In this case you could construct a SQL statement to do exactly what you need and optimize it nicely in SQL Server Management Studio. In some cases this will be much faster than the Sage 300 Views; in some cases it won’t be, if the business logic already does this efficiently.

Incremental Approach

Often KPIs are just sums or consolidations of lots of data. You could maintain the KPIs as you generate the data. So for each new batch posted, the KPI values are stored in another table and incrementally updated. Often KPIs are generated from statistics that are maintained as other operations are run. This is a good optimization approach but lacks flexibility, since to customize it you need to change the business logic. Plus the more data that needs to be updated during posting, the slower the posting process becomes, annoying the person doing the posting.


As a next step you could cache the calculated values: if the user has already executed a KPI once today, its value is cached, so if they exit the program and then re-enter it the KPIs can be drawn quickly by retrieving the values from the cache, with no SQL or other calculations required.

For a web application like the Sage 300 Portal, rather than cache the data retrieved from the database or calculated, it would usually cache the JSON or XML data returned from the web service call that asked for the data. So when the web page for the KPI makes a request to the server, the cache just gives it the data to return to the browser; no formatting, calculation or anything else is required.

Often a cache that lasts one day is good enough. There can be a manual refresh button to force a recalculation, but mostly the user just waits for the calculation once a day and then things are instant.
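To make the idea concrete, here is a minimal sketch of such a cache (the class, the key scheme and the one-day lifetime are my own illustrative choices, not how the Sage 300 Portal actually implements it):

using System;
using System.Collections.Concurrent;

// A tiny per-user KPI cache with a one day lifetime.
class KpiCache
{
    private class Entry { public string Json; public DateTime Computed; }
    private readonly ConcurrentDictionary<string, Entry> cache = new ConcurrentDictionary<string, Entry>();

    public string GetKpiJson(string userAndKpiKey, Func<string> calculate)
    {
        Entry e;
        if (cache.TryGetValue(userAndKpiKey, out e) &&
            (DateTime.UtcNow - e.Computed) < TimeSpan.FromDays(1))
            return e.Json;                        // cached: no SQL or calculation needed

        string json = calculate();                // the expensive SQL/business logic calculation
        cache[userAndKpiKey] = new Entry { Json = json, Computed = DateTime.UtcNow };
        return json;
    }

    // Hook this up to the manual refresh button to force a recalculation.
    public void Refresh(string userAndKpiKey)
    {
        Entry ignored;
        cache.TryRemove(userAndKpiKey, out ignored);
    }
}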

The Cloud

In the cloud, it’s quite easy to create virtual machines to run computations for you. It’s also quite easy to take advantage of various Big Data databases for storing large amounts of data (these are often referred to as NoSQL databases).

Cloud Approach

Cloud applications usually don’t calculate things when you ask for them. For instance when you do a Google search, it doesn’t really search anything, it looks up your search in a Big Data database, basically doing a database read that returns the HTML to display. The searching is actually done in the background by agents (or spiders) that are always running, searching the web and adding the data to the Big Data database.

In the cloud it’s pretty common to have lots of running processes that are just calculating things on the off chance someone will ask for them.

So in the above example there could be a process running at night that checks each user’s KPI settings and performs the calculation, putting the data in the cache, so that the user gets the data instantly first thing in the morning and, unless they hit the manual refresh button, never waits for any calculations to be performed.

That helps things quite a bit, but the user still needs to wait for a SQL query or calculation if they change the settings for their KPI or hit the manual refresh button. A sample KPI configuration screen from Sage 300 is:


As you can see from this example there are quite a few different configuration options, but in some sense not a truly ridiculous number.

I’ve mentioned “Big Data” a few times in this article, but so far all we’ve talked about is caching a few bits of data, and really the number of values being cached won’t be very large. Now suppose we calculate all possible values for this setup screen, use the distributed computing power of the cloud to do the calculations, and then store all the possibilities in a “Big Data” database. This is much larger than what we talked about previously, but we are barely scratching the surface of what these databases are meant to handle.

We are using the core functionality of the Big Data database: we do reads based on the inputs and return the JSON (or XML or HTML) to display in the widget. As our cloud grows and we add more and more customers, the number of these will increase greatly, but the Big Data database will just scale out, using more and more servers to perform the work based on the current workload.

Then you can let these run all the time, so the values keep getting updated and even the refresh button (if you bother to keep it), will just get a new value from the Big Data cache. So a SQL query or other calculation is never triggered by a user action ever.

This is the spider/read model. Another would be to sync the application’s SQL database to a NoSQL database that then calculates the KPIs using MapReduce calculations. This approach tends to be quite inflexible, but it can work if the sync’ing and transformation of the database solves a large number of queries at once. Creating such a database in a manner where the MapReduce queries all run fast is a rather nontrivial undertaking, and it runs the risk that in the end the MapReduces take too long to calculate. The two methods could also be combined: phase one would be to sync into the NoSQL database, then the spider processes calculate the caches, doing the KPI calculations as MapReduce jobs.

This is all a lot of work and a lot of setup, but once in the cloud the customer doesn’t need to worry about any of this, just the vendor, and with modern PaaS deployments this can all be automated and scaled easily once it’s set up correctly (which is a fair amount of work).


There are lots of techniques to produce and calculate business KPIs quickly. All these techniques are great, but if you have a cloud solution and you want its opening page to display in less than a second, you need more. This is where the power of the cloud can come in to pre-calculate everything so you never need to wait.

Written by smist08

February 14, 2015 at 7:25 pm

The Sage 300 SDLC



A lot of my blog posts are to answer questions that I frequently receive. Then I have a ready answer of a blog posting link, or perhaps people read my blog and it saves me receiving an e-mail. This blog posting is along the same lines as I get asked quite frequently about our SDLC (Software Development Lifecycle). Usually this is in regards to someone trying to fill out a giant RFP full of questions that are mostly irrelevant to purchasing ERP software.

I covered various aspects of our development process in other blog postings which I’ll refer to here. Plus our process is always evolving as we learn from our experiences and try to improve. What I’m writing about here is specifically what the development team for Sage 300 ERP does, but a lot of it is also used by other Sage teams on other projects. There are always slight variations, as the different teams do have their own preferences, and as long as they follow the general standards that’s ok.

Within R&D we use the Agile Development Methodology, but R&D exists within a larger context within Sage, much of which doesn’t use Agile frameworks. As a result our Agile development has to fit somewhat within a larger non-Agile system that tracks and coordinates the various projects going on around Sage. This is to ensure all departments know what is going on and can plan accordingly.


We have a general PMO department that tracks all the various projects. It coordinates getting approval for projects, determining release criteria, and coordinating all the various departments such as Marketing, Product Management, IS, etc., so they can do their pieces at the appropriate time.

Initial product ideas come from our Innovation Process, but before converting an innovation idea into a larger development effort there is usually some POC work and some initial rough estimates that then lead to a project kickoff process where an initial business plan, along with an initial project plan are presented and either approved or rejected.

The project is then tracked by the PMO as it goes through development, and at the end, when the Agile part of the development is done, there is a release certification meeting where all the stakeholders get together to certify that the solution is ready for customers. This includes that the software is ready and of a high quality, but also that support is ready, training material is available, back end systems are set up to take orders, marketing is ready and all the other pieces that go into a product launch.


Also at this time we run a final regression to ensure that everything is working correctly. Generally this is only a couple of weeks as a sanity check that the Agile process below worked correctly.

Before making the product available to all customers, we first spend a few months in a controlled release with a handful of live customers to ensure everything works well for them. This not only tests the software, but their ability to be on-boarded, supported and trained. After this has proceeded successfully then the product is made available for everyone.

Some of the reasons for this non-Agile framework are that a number of parts of our organization haven’t adopted Agile yet, and eventually they will need to if we are to have truly effective cloud based products. The other reason, which I blogged about here, is the need to coordinate many disparate teams working on different parts of a larger solution.


Within R&D we use the Agile Development Methodology. I’ve blogged about Agile development a number of times: here, here, here, here and here.

We’ve been using Agile programming for a number of years now. We use 2 week sprints, have sprint planning, maintain a backlog, have daily standups, sprint demos, sprint retrospectives and all the other aspects of Agile. We use VersionOne to manage our projects. In Agile you execute the backlog story by story and have very tight rules on what it means for a story to be “done”. This way as each story is completed, it is fully tested, documented and ready for use. The important thing here is to not build up a large list of defects that need to be fixed near the end of the project. Basically when the last story is finished (done) then the product should be ready to put in customer’s hands.


The doneness criteria vary a bit by Agile team, but here are the doneness criteria for a team on one of our current projects:

  • All tasks items in a backlog story have a closed status
  • The code builds successfully without compiler errors and warnings and without ReSharper issues
  • The code is deployed to test environment successfully
  • The code is checked into the correct repository and branch
  • The code conforms to Sage Branding Guidelines.
  • The code performs to performance standards
  • The code is reviewed for applicable use of all documented standards.
  • The code is refactored and reviewed for good OOP implementation
  • The code conforms to UX wireframes, design and CSS guidelines.
  • Code coverage minimum percentage is 70% with a target of +90%
  • The UI displays correctly in all supported browsers (IE, Safari, Firefox, Chrome)
  • Unit tests are included and run successfully
  • Automation tests are included and run successfully in test environment
  • The environment has not been corrupted (database, etc.)
  • The QA and QA Review tasks are included and complete
  • Reviewed and accepted by the Product Owner
  • Implement and document any build and/or deployment changes
  • Knowledge Transfer Activities (wiki updated, code walkthrough, group code review, etc.)
  • Remaining hours for tasks are set to zero and the backlog item is closed


Ensuring stories are done is the key to a well-run Agile process, probably more important than a well-structured prioritized backlog.

As part of developing for the cloud, we want to release fairly regularly and can’t afford long manual regression tests. As a result we have a lot of emphasis on Unit Tests and Automated tests, such as those blogged here.

Similarly branching strategy, source code management, build management and continuous integration are also important parts of this process.


This was a quick overview of our Sage 300 SDLC. With any big project there are always a lot of moving parts and these have to be tracked accurately and reliably. Agile doesn’t mean that we don’t have deadlines or firm requirements. It does mean that we develop the most important things first and build quality in from the very start.

How to Run Customized Sage 300 Screens from Sage CRM



From the Sage 300 ERP to Sage CRM integration there is the ability to run a number of Sage 300 ERP screens. These are the older VB screens being run as ActiveX controls from the IE browser, not to be confused with the newer Quote to Order web based screens. A common request is how to customize these screens and then run the customized screen from Sage CRM rather than the base screen.

This blog posting covers how to run customized screens from Sage CRM. As a bonus, it also shows how to wrap a Sage 300 screen so that it handles version updates seamlessly and doesn’t require you to re-compile your solution when we release a new version of the base screen. This mechanism requires that you use VB to wrap the base control for deployment. The ideas presented here could probably be ported to other programming systems, but it may not be easy.

A sample project that wraps Order Entry is located on Google Drive here. This project will be used for most of the examples in the document, so feel free to load it up and follow along.  In order to view the wrapper, simply unzip the file, and open up the CRMOEOrderUI.vbp.

Create the Wrapper

The following instructions will show the basic steps of how to create a Sage 300 UI Browser Wrapper.  The wrapper can then be referenced by an ASP page. There should be a constant interaction between the UI, the wrapper, and the ASP page (i.e. the UI calls UI_OnUIAppOpened in the wrapper, the wrapper raises the UIWasUnLoaded event to the ASP page, and the ASP page in turn catches the event and closes the window containing the wrapper and the attached Accpac UI).


1. Open up Visual Basic and select a new Active X Control. Click Open.


2. Go to Project/ References, and select ACCPAC COM API Object, ACCPAC Data Source Control, ACCPAC Signon Manager, VB IObjectSafety Interface, ACCPAC Application Installer, and ACCPAC Session Manager.


3. The project name determines the name of the wrapper (OCX).  In this case, the wrapper name will be “eCRMOEOrderUI”.

4. The name that you give the UserControl should be descriptive of what is contained on it.  In this case, give the UserControl the same name as the Accpac UI that is wrapped (in this case, OEOrderUI).


5. When you are coding, refer to the Accpac UI as “UserControl” (i.e. UserControl.Width, UserControl.Height).

6. We use the VBControlExtender to wrap the Order Entry OCX control dynamically when UserControl_Show is called (see the code for UserControl_Show accompanying this document). When referencing elements and methods within the Order Entry OCX control you would use ctlDynamic.object. The control is installed and opened using AccpacOcxRegHelper.CLS, which makes entries into the registry that allow the VBControlExtender to reference the control by name as opposed to the CLSID that is returned from Roto.


7. Now you are ready to begin writing the code that will catch the events thrown by the Accpac UI, and raise your own events to the ASP that will contain your wrapper.

8. Go into your code view and begin instantiating your events, objects, and variables.

9. Begin by declaring your objects that are going to handle events thrown by the AccpacDataSource controls in the related Accpac OCX controls.  In this case, event handlers of the AccpacOE1100 class are being declared so that they can detect the events thrown by the class.


10. Next, declare the events that you will want to raise to the ASP page.


11. Declare your public variables


12. Declare your remaining variables.  In this case, mSignonMgr is going to be used to sign on the Accpac UI with the signon manager so that the signon screen does not keep popping up every time that the UI is loaded.  mlSignonID is going to be the signon ID.


13. Outline your functions that will be called by the ASP Page.  In this case, the ASP page will give the values that are to be used to populate the UI, or to insert the customer ID into the UI’s customer field for a new customer quote.

14. Next, list out the events that can be called by the UI AccpacDataSources.  In the screenshot below, you can see that the wrapper is checking the eReason variable being passed, and depending on what eReason is being passed, a different event will be raised to the ASP page (AddNew, Delete etc) in the RaiseEventEX sub.



15. Other functions are also called by the Accpac UI. The wrapper will be notified of these events through ctlDynamic_ObjectEvent (see below). Once the UI has opened, ctlDynamic_ObjectEvent is called with an event name of “OnUIAppOpened”; a private sub UI_OnUIAppOpened is then called, objects in the wrapper are initialized, and the UIWasLoaded event is raised to the ASP page notifying it that the UI has been opened.

16. Finally, define the Get properties that are available to the ASP page so that it can resize its windows when the UI has been loaded onto the ASP page.  In this case, the ASP page will resize its windows to be the same width, height, and unit of measurement as the UI.


17. Now, you have successfully entered all the code that the wrapper will use to receive the function calls from the UI, as well as raise the events to the ASP page.

Customize the Sage CRM ASP Page

Now that you have a wrapped OCX, you can follow the ASP page in Sage CRM (for example, OE_OrderUI.asp, as follows) to call your customized OCX.


Then it will open the OE Order Entry screen for order ORD000000000076.

The OE_OrderUI.asp file has the following code:



eCRMOEOrderUI raises the following events:

UIWasLoaded(), UIWasUnLoaded(), AddNew(), Delete(), Update(), FieldChange(), Init(), Read(), Fetch()

eCRMOEOrderUI exposes the following Properties:

UIWidth(Read Only), UIHeight(Read Only), TwipsPerPixelX(Read Only), TwipsPerPixelY(Read Only)

eCRMOEOrderUI exposes the following Functions:

PopulateUI(OrderID As String, CustomerID As String);

CreateNewQuote(CustomerID As String);


<SCRIPT for="eCRMOEOrderUI" Event="UIWasLoaded()">

var width  = eCRMOEOrderUI.UIWidth / eCRMOEOrderUI.TwipsPerPixelX;
var height = eCRMOEOrderUI.UIHeight / eCRMOEOrderUI.TwipsPerPixelY;

if ((BrowserDetect.browser=="Explorer") && (BrowserDetect.version >= 7))
{
    width  += 35;
    height += 130;
}
else
{
    width  += 35;
    height += 100;
}

var left = (screen.width - width) / 2;
var top = (screen.height - height) / 2;

window.resizeTo(width, height);

PopulateUI(<%=EnESCDocNum%>, <%=EnESCCustomer%>);

width  = eCRMOEOrderUI.UIWidth  / eCRMOEOrderUI.TwipsPerPixelX;
height = eCRMOEOrderUI.UIHeight / eCRMOEOrderUI.TwipsPerPixelY;

BorderWidth  = ClientWidth()  - width;
BorderHeight = ClientHeight() - height;

bLoaded = true;

</SCRIPT>


Hopefully you find this helpful in customizing Sage 300 ERP screens. Even if you don’t run them from Sage CRM, not having to re-build them for each Product Update can save you some time.

Written by smist08

October 11, 2014 at 4:14 pm

