Stephen Smith's Blog

Musings on Machine Learning…

Posts Tagged ‘.net api’

Performance and the Sage 300 Views Part 2


Introduction

Last week we discussed avoiding table scans when using the Sage 300 ERP APIs. This week we are going to look at some other issues to do with updating data and with processing meta-data.

Last week I showed a cheetah running as an example of performance and speed (the fastest land animal), but this week here she is resting and getting some attention.

[Image: a resting cheetah]

AOM/UI Info/ViewDoc

First, if you are wondering where to find out what indexes a View supports, there are quite a few tools that show this. You can always look in SQL Management Studio, but then you won’t know which index it is in our numbering scheme. ViewDoc is a good tool that comes with the SDK and gives this information. UI Info comes with System Manager and can drill down through the UI information to get detailed View information. Then there is the Sage 300 Application Object Model (AOM), located here. Just note that to use the AOM you must use Internet Explorer, for some obscure reason.

Updating Data

Often if you are manipulating lots of records it’s in a header/detail situation. In this case all the database operations are done when you insert or update the header. The nice thing about this is that the Views know a lot about our database API and will do this in an optimal manner, so you don’t need to worry about it. Similarly if you delete a header, the View will delete all attendant details for you in an efficient manner.

But suppose you want to update a bunch of records using our .Net API and want to know the most efficient way to do this. Say we want to add something to the end of every A/R Customer Name. An easy brute force way to do this would be:

arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}

This works but you might find it a bit slow. We can speed it up quite a bit by bracketing the whole thing in a database transaction:

mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}
mDBLinkCmpRW.TransactionCommit();

The times from the sample program (the same one as last week but with a bit added) are:

Time to update all customers: 00:00:00.087
Time to update all customers in a transaction: 00:00:00.038

So putting things in a database transaction helped. This is for Sample Data so there are only a few customers. The updated sample program is the PerformanceSamples project located here (both folder and zip file).

Database Transactions

Generally when using our API you don’t need to worry about database transactions, but occasionally, like in the above example, they are necessary. In the above example the first method has the side effect that each update is done in a separate transaction. That means you have the overhead of starting and committing a transaction with every record update. In the second example we start a transaction, so all the records are committed as a single transaction. Strictly speaking, the two examples don’t do the same thing: if the first example throws an exception part way through, then all the updates done up to that point will be in the database, whereas in the second example they will be discarded since the transaction will be rolled back. This difference can be quite important if there are database integrity issues to consider. Generally Sage 300 ERP uses transactions to go from one state where the database has full integrity to another. This way we can rely on database transactions to always maintain full database integrity.

There is overhead to setting up and committing a transaction, but there are also resources used for every operation done inside a transaction. At some point the above example will start to slow down if you have too many A/R customers. Generally you might want to commit the transaction every thousand customers or so for optimal performance (but make sure you maintain database integrity along the way).
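
Here’s a sketch of what that batching might look like, using the same objects as above (the batch size of 1000 is just a starting point to tune, and this is only safe if database integrity holds at each commit point):

int count = 0;
mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
    if (++count % 1000 == 0)
    {
        // Commit this batch and start a new transaction, keeping
        // transactions (and the locks they hold) short.
        mDBLinkCmpRW.TransactionCommit();
        mDBLinkCmpRW.TransactionBegin();
    }
}
mDBLinkCmpRW.TransactionCommit();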

Also keep in mind that records updated in a transaction are locked from the point of update through to the end of the transaction, so updating a lot of records in one transaction will lock a lot of records and force anyone else who needs to read those records to wait until your transaction completes. So try to keep transactions quick. Definitely don’t do any UI type operations in the middle of a transaction (like asking the user a question).

Revisioned Views

Revision List type views store all inserts/updates/deletes in memory until you call Post. Generally these are detail views and you don’t see this functionality because it’s handled by the header. But occasionally you may need to deal with one of these directly (like perhaps GLAFS). In this case, since each Post is a transaction, you just need to be aware of how often you call it, as this has the same effect on performance as mentioned above.
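
Here’s a rough sketch of the pattern (revView and recordCount are stand-ins; the fields you set will depend on the View):

// Inserts accumulate in memory (the revision list); nothing hits the
// database until Post, and each Post is one database transaction.
for (int i = 0; i < recordCount; i++)
{
    revView.RecordClear();
    // ... set the fields for record i here ...
    revView.Insert();          // stored in the revision list only

    if ((i + 1) % 1000 == 0)
        revView.Post();        // write this batch in one transaction
}
revView.Post();                // post any remaining records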

Deleting

Although you can delete records as above by just replacing the Update call with a Delete call, there is a better way. The Views have a FilterDelete method where you pass in a browse filter and all the records that match will be deleted. This will prove quite a bit faster than the above.
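
A minimal sketch (I’m assuming the single string argument form of FilterDelete here; check the API reference for the exact signature in your version):

// Delete every customer whose name starts with TEST in one call,
// instead of fetching and deleting record by record.
arCUS.FilterDelete("NAMECUST LIKE \"TEST%\"");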

Meta-Data

If you run RVSpy with all the View calls selected you will see a lot of meta-data calls, getting information on fields and such. Generally meta-data calls are quite fast and don’t involve going to the database. However if you really go crazy you can slow things down quite a bit. If you make everything dynamic then you could end up making lots of meta-data calls, and cumulatively these slow you down a bit. Similarly, using constants (like field indexes) to get at fields is slightly faster than passing field names, because you avoid a dictionary lookup (admittedly quite fast, but not as fast as direct access). Mostly people exercise good judgement and don’t go too wild driving everything from meta-data, but we have seen some crazy cases.
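
One cheap win is to hoist field lookups out of your loops and reuse the field object, since it stays bound to the View’s current record (a sketch of the earlier update loop):

// Look the field up once rather than calling FieldByName (a
// dictionary lookup) twice per record.
var nameField = arCUS.Fields.FieldByName("NAMECUST");
arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    nameField.SetValue(nameField.Value + "A", false);
    arCUS.Update();
}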

Summary

Just a quick overview of some performance tips. Hopefully these all help to make your use of the Sage 300 API more efficient.


Performance and the Sage 300 Views Part 1


Introduction

The Sage 300 ERP Views (Business Logic) give you a great deal of power to perform Accounting operations through our various APIs. However as in any programming, performance must always be taken into account. The Sage 300 ERP Views have a lot of features to help you perform operations with good performance, but like anything if they are used incorrectly, performance can be miserable.

This article is going to talk about various features and options that you can take advantage of to improve your application’s performance. As I am writing the article, it’s getting quite long, so I think I’m going to break it into two parts.


Measure and Test

One of the big mistakes people make when performance tuning is to just make assumptions and changes without doing real measurements. If you have your code in a source control system, first establish a baseline for how long something takes, then make your changes and re-measure the time. Only check in your changes if the time is faster; if it isn’t, then you are just churning your code and potentially adding bugs. Performance is subtle and often the best ideas and intentions just make a process slower.
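
In .Net the easy way to take these measurements is System.Diagnostics.Stopwatch, which is the sort of thing that produces the timings quoted in these articles:

using System.Diagnostics;

// Establish a baseline, make a change, then re-measure.
Stopwatch watch = Stopwatch.StartNew();
// ... the operation you are tuning goes here ...
watch.Stop();
Console.WriteLine("Time to determine if there are open batches: " + watch.Elapsed);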

Multi-User versus Single-User Performance

This article is about optimizing processes for single users. If you want to optimize for multi-user throughput then it’s all about reducing locks and keeping resource usage down. Sometimes these goals align, i.e. one person doing something quicker translates to 100 people doing things quicker; sometimes they are opposed, i.e. one person can do something way quicker by taking over all available resources to the detriment of everyone else.

Read-Only versus Read-Write

You can open our database links and views either in read-write mode or read-only mode. Generally if you aren’t updating the data then you want to open in read-only mode as this makes things quite a bit faster. If you might update the data then we have to use more expensive SQL operations so that if you do update the data, the update is fast and multi-user considerations are handled. If you open a table or link read-only then we use much lighter weight SQL operations and the data is returned much quicker. Finders use this to display their data quicker.
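
In the .Net API you make this choice when opening the database link (and the Views opened from it); a minimal sketch, assuming the usual session setup from earlier articles in this series:

// A read-only link lets the Views use lighter weight SQL operations.
DBLink dbLinkRO = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadOnly);
ACCPAC.Advantage.View arCUS = dbLinkRO.OpenView("AR0024");  // A/R Customers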

FilterSelect/FilterFetch versus Browse/Fetch

When you Browse/Fetch you can always update or delete the record fetched. As mentioned above that can introduce extra overhead and slow things down. Making the table or link read-only will help Browse/Fetch, but perhaps a better method is to use the FilterSelect/FilterFetch methods, which are better optimized for SQL Server than Browse/Fetch. The results from these can’t be updated or deleted, but at the same time the access method is always lightweight, whether the link is open read-only or read-write.

Indexes

Sage 300 will always use an index to read data. We have a lot of code to optimize access based on available indexes. If you use the indexes provided your code will be much faster.

For example, suppose you want to know if there are any open G/L Batches. A quick bit of code to do this is:

glBCTL.Browse("BATCHSTAT=1", true);
bool isOpenBatch = glBCTL.GoTop();

This works pretty well on sample data, but then you go to a client site and suddenly it becomes quite slow. The reason is that since BATCHSTAT isn’t part of the primary index, the GoTop basically goes looking through the Batch table until it reaches the end or finds an open batch. Since open batches are usually at the end, this tends to be sub-optimal. Practically speaking you could speed this up by searching through the table backwards, since then you would probably find one quicker, but if there are no open batches you would still search the whole table. Fortunately there is a better way. The GLBCTL table has two indexes: one is its primary default index of BATCHID, and the other, secondary index is on BATCHSTAT and BATCHID (to make it an index without duplicates). So it makes sense to use this index:

glBCTL.Order = 1;
glBCTL.Browse("BATCHSTAT=1", true);
isOpenBatch = glBCTL.GoTop();

Simply setting the Order property makes this search much quicker. I included a sample program with timers and the full code. The results on sample data show the speed difference (not that it was all that slow to start with):

Time to determine if there are open batches: 00:00:00.034
Time to determine if there are open batches take 2: 00:00:00.007

The sample program is located here. It’s the PerformanceSamples one (folder and zip).

So generally you want to use an index that matches the fields that you are searching on as closely as possible. Usually having clauses in your browse filter that use the index segments from left to right will result in the fastest queries.

This example may look a little artificial, but once you get into the operational modules like O/E and P/O this becomes crucial. That is because the main tables, like the Order Header, have a uniquifier as the primary index. When you want to look something up it’s usually by something like order number, and to do this efficiently you have to use an alternate index. So once you are using these modules you will be using alternate indexes a lot. In these modules also be careful that quite a few alternate indexes allow duplicates, so you might unexpectedly get back quite a few records.
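
For example, to find an order by order number you would switch to the alternate index that starts with ORDNUMBER, in the same way as the batch example above (a sketch: the index number here is illustrative, so check ViewDoc for the real numbering, and oeORDH is assumed to be an open Order Header View):

// Use the alternate index on ORDNUMBER rather than scanning via the
// primary uniquifier index.
oeORDH.Order = 1;   // illustrative; confirm which index is on ORDNUMBER
oeORDH.Browse("ORDNUMBER = \"ORD000000000064\"", true);
bool foundOrder = oeORDH.GoTop();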

RVSpy/DBSpy

RVSpy and DBSpy are good tools for identifying bad behavior. The logs contain time information so you can see where the time is being used, but more often than not doing something bad for performance results in a series of operations appearing over and over in these logs. Usually scrolling to the middle of the output file is a good way to see something going awry. You can also use SQLTrace or ODBCTrace, but I find these slightly less useful.

When using RVSpy for this purpose, it helps to turn off logging to a Window (slow) and only log to a file (make sure you specify one). Further choose the View calls you want to log, usually disabling anything to do with meta-data and anything that is field level.

So if you see output like:

[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.58;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.60].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}

If this goes on for pages and pages, then you have something wrong.

Avoid Table Scans

Most of this article is about avoiding table scans, but just to re-iterate table scans are bad. People are often fooled by testing on sample data. Many of the tables in sample data are quite small and it doesn’t really matter what you do. However in the real world with real customer databases things will usually be quite different. For instance sample data has 9 tax authorities, which you might think is reasonable. But in the USA where any municipal government agency can charge a sales tax, there are over 35,000 tax authorities. If you read all these (like to populate a combo-box to pick one from), then you will run very slowly and your customers will be unhappy.

Summary

Sage 300 ERP has many mechanisms to access and manipulate data efficiently. But as with anything in programming, if you use APIs without due care and attention then performance (and quality in general) will suffer.

Written by smist08

March 10, 2015 at 9:44 pm

Drilldown in Sage 300 ERP


Introduction

Much accounting detail is entered in one application and passed on to another for recording. Drilldown is the ability to reverse the audit trail and display, application by application, the document back to its original entry into the Sage 300 ERP system. For example, in Sage 300 General Ledger (G/L), you can drilldown from General Ledger Transaction History to the Journal Entry, from the Journal Entry to the originating transaction in Accounts Receivable, and from the Invoice, Credit Note, or Debit Note, to the originating transaction in Order Entry.

The way this works is a bit cryptic in Sage 300 ERP’s database and this blog article will attempt to explain some of the internal workings so that developers and customizers who want to use this data for other purposes can hopefully figure out how to interpret it.

The documentation for the full drilldown infrastructure for third party developers is contained in Appendix L of the SDK’s Programming Guide.


Drilldown Database Fields

The drilldown fields in a document provide a link to the application that created the document. They are done in a generic way so any application (Sage or third party) can provide this information and have its screens drilled down to. As a result the fields are fairly generic and it’s up to the drilldown target to provide what it needs when it creates the document. There are three fields: the source application (our usual two character application id, like AP), a drilldown type (each application may have several document types, like invoices or receipts), and a generic link field, which is a large number into which the application packs whatever it needs to do a link.

For example you can drill down from G/L Journal Entry back to the application that created the Journal. In the GLJEH table there are three fields: DRILAPP, DRILSRCTY, DRILLDWNLK. Suppose P/O creates a Journal Entry: it might populate DRILAPP with “PO”, DRILSRCTY with 3 (for Receipt) and DRILLDWNLK with 1740 (where 1740 is a link to PORCPH1.RCPHSEQ).

This is rather cryptic, since these fields are meant to be internal to the application that will be drilled down to. But suppose you want to use these fields for other purposes. Here I’ll give a few examples of how Sage applications use these, which should help in many cases. Plus they will give an indication of how these are built, so you can reverse engineer other cases.

Here are the ones used for I/C, O/E and P/O. These are pretty straightforward due to the way data is indexed in these applications. The various types and links used are:

IC:

Receipt: 1: ICREEH.DOCUNIQ
Shipment: 2: ICSHEH.DOCUNIQ
Adjustment: 3: ICADEH.DOCUNIQ
Transfer: 4: ICTREH.DOCUNIQ
Assembly: 5: ICASEN.DOCUNIQ

OE:

Shipment: 3: OESHIH.SHIUNIQ
Invoice: 1: OEINVH.DAYENDNUM
Credit and Debit Note: 2: OECRDH.CRDUNIQ

PO:

Receipt: 3: PORCPH1.RCPHSEQ
Invoice: 5: POINVH1.INVHSEQ
Return: 4: PORETH1.RETHSEQ
Credit Note: 6: POCRNH1.CRNHSEQ
Debit Note: 7: POCRNH1.CRNHSEQ

A/R and A/P are a bit more difficult. Here they have to pack quite a bit of information into that field. A 10-byte BCD can hold up to 18 digits. Into this we want to pack the Posting Sequence Number, Batch Number and Entry Number. The way this works is that the first digit is the size of the Posting Sequence Number and the second digit is the size of the Batch Number. Then you have the Posting Sequence Number, then the Batch Number, and the leftover digits are the Entry Number. Since the first two digits are used for sizes, the sum of the lengths of the Posting Sequence Number, Batch Number and Entry Number must be less than or equal to 16.

For instance, if DRILLDWNLK is 222765000000000001, then the length of the Posting Sequence Number is 2, as is the length of the Batch Number. The Posting Sequence Number is 27, the Batch Number is 65 and the Entry is 1.
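
Putting that together, a small helper to unpack one of these A/R or A/P links might look like this (a sketch based on the layout just described):

// Decode an A/R or A/P drilldown link such as 222765000000000001.
// First digit: length of the Posting Sequence Number; second digit:
// length of the Batch Number; then the two numbers themselves; the
// remaining digits are the Entry Number.
static void DecodeDrillDownLink(string link)
{
    int postSeqLen = int.Parse(link.Substring(0, 1));
    int batchLen = int.Parse(link.Substring(1, 1));
    long postingSeq = long.Parse(link.Substring(2, postSeqLen));
    long batchNum = long.Parse(link.Substring(2 + postSeqLen, batchLen));
    long entryNum = long.Parse(link.Substring(2 + postSeqLen + batchLen));
    Console.WriteLine("Posting Sequence: " + postingSeq +
        ", Batch: " + batchNum + ", Entry: " + entryNum);
}

Running this on 222765000000000001 prints Posting Sequence: 27, Batch: 65, Entry: 1, matching the example above.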

Drilldown View

Knowing the raw format is fine for some applications. But if you are operating in an environment with access to the Sage 300 Business Logic, then you can call the application’s View to interpret this value for you and give you back the UI to run and the parameters to pass it, to get the correct information displayed.

Here we will write a small application that uses the Sage 300 .Net API to process through the drilldown information in the G/L Journal Header and process the A/P drilldown information. You can find the project here; it is the Drilldown one.

Each application that supports drilldown has such a view. It is defined in the application’s xx.ini file (in this case ap.ini): in the [setup] section there will be a DrillDownView=aannnn entry which specifies the drilldown view (in this case AP0062). In the sample program I just hard code the View and leave it as an exercise to the reader to generalize and load these from the .INI file.
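
If you do want to generalize it, the lookup is just a scan of the INI file; a rough sketch (treat the path handling as an assumption, since where the INI files live depends on your install):

// Return the DrillDownView entry (e.g. "AP0062") from the [setup]
// section of an application's INI file such as ap.ini, or null if
// the application doesn't define one.
static string GetDrillDownView(string iniPath)
{
    bool inSetup = false;
    foreach (string rawLine in System.IO.File.ReadAllLines(iniPath))
    {
        string line = rawLine.Trim();
        if (line.StartsWith("["))
            inSetup = line.Equals("[setup]", StringComparison.OrdinalIgnoreCase);
        else if (inSetup && line.StartsWith("DrillDownView=", StringComparison.OrdinalIgnoreCase))
            return line.Substring("DrillDownView=".Length).Trim();
    }
    return null;
}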

Basically you use this view by setting the drill down type and link and then calling Process(). This then populates the other fields. This gives you a status field of whether you can drill down on this, a roto id of a UI to run and the parameters to pass the UI. Note that UI parameters are separated by line breaks.

So in this case when we run the application we get lines specifying the drilldown info followed by the drilldown View’s interpretation of it. For instance:

Drill down info in GLJEH: AP 0 223055000000000001

UI Information to run for this: AP2100 MODE=1\nBATCH=55\nENTRY=1

Here is the main part of the code that processes this:

// Cycle through all of GLJEH and print out all the drilldown information.
while (true == glJEH.Fetch(false))
{
    string drillSrce, drillLnk, rotoid, parameters, drillkey, drillInfo;
    int drillType;

    drillSrce = glJEH.Fields.FieldByName("DRILAPP").Value.ToString();
    drillType = Convert.ToInt32(glJEH.Fields.FieldByName("DRILSRCTY").Value.ToString());
    drillLnk = glJEH.Fields.FieldByName("DRILLDWNLK").Value.ToString();
    Console.WriteLine("Drilldown: " + drillSrce + " " + drillType + " " + drillLnk);
    if (drillSrce.Equals("AP"))
    {
        apDrill.Fields.FieldByName("SRCETYPE").SetValue(drillType, false);
        apDrill.Fields.FieldByName("DRILLDWNLK").SetValue(drillLnk, false);
        apDrill.Process();
        drillInfo = apDrill.Fields.FieldByName("DRILLTYPE").Value.ToString();
        rotoid = apDrill.Fields.FieldByName("ROTOID").Value.ToString();
        parameters = apDrill.Fields.FieldByName("PARAMETERS").Value.ToString();
        drillkey = apDrill.Fields.FieldByName("DRILLKEY").Value.ToString();
        Console.WriteLine(drillInfo + " " + rotoid + " " + parameters + " " + drillkey);
    }
}

Summary

Drill down is a useful feature in Sage 300 ERP and hopefully this information helps people leverage the infrastructure for some new interesting customizations and integrations.

Written by smist08

February 27, 2015 at 8:48 am

Using the Sage 300 .Net Helper APIs


Introduction

The main purpose of our .Net API is to access our Business Logic Views which I’ve blogged on in these articles:

An Introduction to the Sage 300 ERP .Net API
Starting to Program the Sage 300 ERP Views in .Net
Composing Views in the Sage 300 ERP .Net API
Using the Sage 300 ERP View Protocols with .Net
Using Browse Filters in the Sage 300 ERP .Net API
Using the Sage 300 .Net API from ASP.Net MVC
Error Reporting in Sage 300 ERP
Sage 300 ERP Metadata
Sage 300 ERP Optional Fields

However there are a number of simple things that you need to do repeatedly for which it would be a bit of a pain to go through the Views every time. So in our .Net API we provide a number of helper APIs that give you quick, efficient mechanisms to access things like company, fiscal calendar and currency information.

With Sage Summit 2014 just a few weeks away (you can still register here), I can’t pre-empt any of the big announcements here in my blog (as much as I’d like to), so perhaps a bit of an easier .Net article instead. For many, these examples are fairly simple, but I’m always getting requests for source code, and I happen to have a test program that exercises these APIs that I can provide as an example. This program was written to help test and debug these APIs for our 64 Bit/Unicode version, which might explain why it tends to print a rather strange selection of fields from some of the classes.

Sample Program

The sample program for this article is a simple WinForms application that uses the Sage 300 ERP API to get various information from these helper classes and then populates a multi-line edit control with the information gathered. The code is the dotnetsample (folder or zip) located in the folder on Google Drive at this URL. The code is hard coded to access SAMLTD with ADMIN/ADMIN as the user id and password. You may need to change this in the Session.Open call to match what you have installed/configured on your local system. I’ve been building and running this using Visual Studio 2013 with the latest SPs and the latest .Net.


The Session Class

The Session class is the starting point for everything else. Besides opening the session and establishing the DBLink, you can use this class to get some useful version information as well as some information about the user, like their language.

The DBLink Class

From the session you get a DBLink object that is then your connection to the database and everything in it. From this object you can open any of our Business Logic Views and do any processing that you like. Similarly you can also get quick access to currency and fiscal calendar information from here. Of course you could do much of this by opening various Common Service Views, but this would involve quite a few calls. Additionally the helper APIs provide some caching and calculation support.
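
For reference, the usual pattern to get from nothing to a DBLink looks something like this (a sketch: the Init arguments and credentials are placeholders for whatever your integration uses):

using ACCPAC.Advantage;

// Open a session, then a database link to the company database.
Session session = new Session();
session.Init("", "XY", "XY1000", "62A");   // program id/version are placeholders
session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);
DBLink mDBLinkCmpRW = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);

// The helper classes then hang off these two objects, e.g.
// mDBLinkCmpRW.Company, mDBLinkCmpRW.FiscalCalendar and
// mDBLinkCmpRW.GetCurrency(...).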

The Company Class

Accessing the Company property from the DBLink object is your quick shortcut to the various Company options information stored in Common Services in the CS0001 View. This is where you get things like the home currency, the number of fiscal periods, whether the company is multi-currency, and address type information. Generally you will need something from here in pretty much anything you do.

The FiscalCalendar Class

You can get a FiscalCalendar object from the FiscalCalendar property of the DBLink. In accounting, fiscal periods are very important, since everything is eventually recorded in the General Ledger in a specific fiscal year/fiscal period. G/L mostly doesn’t care about exact dates, but really cares about the fiscal year and period. For accurate accounting you always have to be very careful that you are putting things in the correct fiscal year and period. In Common Services we set up our financial years and fiscal periods, assigning them various starting and ending dates. Corporate fiscal years don’t have to correspond to a calendar year and usually don’t. For instance the Sage fiscal year starts on October 1 and ends on September 30.

This object then gives you methods and properties to get the starting and ending dates for fiscal periods, years or quarters. Further it helps you calculate which fiscal year/period a particular date falls in. Often all these calculations are done for you by the Views, but if you are entering things directly into G/L these can be quite useful. Some of the parameters to these methods are a bit cryptic, so perhaps the sample program will help with anyone having troubles.

The Currency Classes

There are several classes for dealing with currencies: the Currency, CurrencyTable and CurrencyRate classes. You get these from the DBLink’s GetCurrency, GetCurrencyTable and GetCurrencyRate methods. There is also a GetCurrencyRateTypeDescription method to get the description for a given Currency Rate Type.

The Currency object contains information for a given currency, like the description, number of decimals and decimal separator. Combined with the Currency Rate Type, there is a CurrencyTable entry for each Currency Code and Currency Rate Type. Then for each of these there are multiple CurrencyRates, one for each rate date.

So if you want to do some custom currency processing for some reason, then these are very useful objects to use. The sample program for this article has lots of examples of using all of these.
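
As a small sketch (GetCurrency is from the API as described above; the HomeCurrency, Description and Decimals property names are my assumptions based on the descriptions here):

// Look up the home currency and print some of its attributes.
Currency homeCur = mDBLinkCmpRW.GetCurrency(mDBLinkCmpRW.Company.HomeCurrency);
Console.WriteLine("Home currency: " + homeCur.Description +
    ", decimals: " + homeCur.Decimals);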

Remember to always test your programs against a multi-currency database. A common bug is to do all your testing against SAMINC and then have your program fail at a customer site who is running multi-currency. Similarly it helps to test with a home currency like Japanese Yen that doesn’t have two decimal places.

Summary

This was just a quick article to talk about some of the useful helper functions in our Sage 300 ERP .Net API that help you access various system data quickly. You can perform any of these functions through the Business Logic Views, but since these are used so frequently, they save a lot of programming time.

Written by smist08

July 19, 2014 at 5:48 pm

The Sage 300 System Manager Core DLLs


Introduction

We hold a developer’s exchange (DevEx) every couple of weeks where one of our developers volunteers to present to all the other developers in our office. This past week I presented at the DevEx on what all the core DLLs in our Sage 300 runtime folder do. I thought this might be of interest for a wider audience so here are the gory details.

Architecture

Our marketing-supplied architecture diagram is the following, which highlights our three tiers and hides a lot of the details of how the object repository, APIs and supporting services are implemented. I’ve blogged previously on our Business Logic Views. In this article I’m going to go into more detail on all the DLLs that provide the framework to support all of this.

[Architecture diagram]

Lower Level DLLs

If you are an ISV developing Sage 300 SDK applications or have worked for Sage on the 300 product then you will have had to encounter a number of these DLLs. I’m only looking at a subset of current DLLs, and I’m not looking at all the DLLs that support older technologies that are still present to maintain compatibility with add-ons.

[Diagram: the lower level DLLs]

I didn’t add arrows to this diagram since everything pretty well calls everything else below it, but I’ve segregated the DLLs a bit by how low or high level they are. So here is a quick synopsis of each one:

A4wcompat.dll: We created this DLL back when we did a native port of the Sage 300 Views for Linux. This DLL isolates operating system differences that need more than some clever #defines. A big part of this is the thread and process synchronization and locking support. Even though we never released the native Linux version, this isolation of operating system dependent parts has made adding multi-threading support, 64 bit support and Unicode support easier.

A4wmem32.dll: In 16 bit Windows, the built in memory management was really slow, so everyone used their own. Now this DLL uses the Windows and C default memory management, but is still important for global memory that needs to be shared across processes. Originally this was done through the data segment of a fixed DLL, but now is done through memory mapped files.

A4wlleng.dll: This is just a language DLL that holds some lower level error messages used by System Manager.

A4wsqls.dll: This is the SQL Server database driver (there is also a4worcl.dll for Oracle and a4wbtrv.dll for Pervasive.SQL). This is dynamically loaded based on the type of database you are connecting to. For more on our database support see this article.

Cato3msk.dll, cato3dat.dll: The cato3 DLLs are the old CA common controls. We don’t use these in our UIs anymore, but cato3msk.dll provides our mask processing that is used by the Views. Similarly we don’t use this date control, but do use a routine here to format dates in error messages correctly.

A4wroto.dll: This handles the loading of the various View DLLs as well as the various UIs we’ve used in the past. It loads the roto.dat files and handles loading the right DLLs when View subclassing is going on or stub Views need to be used.

A4wsem.dll: This handles the locking of the semaphor.bin file. It allows processes to lock the company database, an application or the whole site. It also handles application specific cross workstation locking needs.

A4wrv.dll: This is the main DLL API entry point for the Views. It manages all the calling of the Views and handles other tasks like sending the calls for macro recording. For more on our View interfaces see this article.

A4wapi.dll: This is quite a hodge-podge of services for the Views like revision lists, error reporting and such. It also has support routines for the older CA-Realizer UIs. This is quite a big DLL and has most of our C level API in it.

A4wrpt.dll: This is our interface to Crystal Reports, it started as our interface to CA-RET then was converted to Crystal using their CRPE DLL interface, then converted to Crystal’s COM interface and now uses Crystal’s .Net Interface.

A4wprgt.dll: This DLL handles replicating the system database tables into the various company databases when needed.

A4wmtr.dll: This is our meter DLL for long running processes. It can either put up a meter dialog or just report back to the caller, the current status and percent complete. It also provides the API for cancelling long running processes.

Higher Level APIs

The next level are some of the DLLs that make up our Java, COM and .Net interfaces. There is a bit of complexity here due to how our previous web deployed system worked. Here we could communicate back to the server originally using DCOM and later with .Net Remoting. The .Net Remoting layer provides both the communications layer for this web deployed mode and also acts as our .Net API. How you create your original session determines which actual DLLs and calling conventions are used.

[Diagram: the higher level API DLLs]

A4wapiShim.dll: This is the C side of our Java JNI layer. It talks to all the lower level DLLs to get its work done.

Sajava.jar: This is the Java side of our Java JNI interface. This allows Java programs to easily call Java classes to interface to our Business Logic Views. For more on this interface see this article.

A4wcomsv.dll: This is the main workhorse for the COM and .Net APIs. It does all the heavy lifting and interfacing to the core DLLs.

Accpac.Advantage.COMSVR.Interop.dll: This just performs the .Net to COM transition which is created by the MS tools.

Accpac.Advantage.Server.dll: Server side of the .Net API, handles the .Net Remoting requests if remotely called or just passes through otherwise.

Accpac.Advantage.Types.dll: Defines all the various types we use in our .Net API.

Accpac.Advantage.dll: This is the main external interface for our .Net API. For more on our .Net API see the series of articles starting with this one.

A4wcomexps.dll: Used when the VB UIs are going to talk .Net Remoting, this DLL is inside a4wcomex.cab.

A4wcomex.dll: The main entry point for the COM API.

Many More DLLs

There are many more DLLs in the Sage 300 runtime, but most of the others are for obsolete APIs like the xapi, the older a4wcom COM API, the cmd API, the icmd API, etc. There are other important ones, like those to do with Database Setup, but these are the main ones used when you talk to the Business Logic through one of the popular APIs.

Summary

For anyone interested this should give you a good idea of what the main DLLs in the runtime folder do. And give you an idea of how the various services in Sage 300 ERP are layered.

Written by smist08

February 8, 2014 at 5:19 pm

Sage 300 ERP Optional Fields


Introduction

Optional Fields were added to Sage 300 ERP as the major feature for version 5.3A. They are a great way to add custom data fields to many master and transaction screens in Sage 300. They also have the benefit that we can flow them with the transactions through the system, for instance from an O/E Order to the corresponding A/R Invoice to the G/L Batch. This opens up a lot of power for tracking extra data in the system and to do sophisticated reporting based on it.

In this article we will look at some of the programming considerations when dealing with Optional Fields at the API level. We’ll use the Sage 300 .Net API to explore how to deal with a few things that come up with Optional Fields.

Many programmers feel they can ignore Optional Fields and their program works perfectly for sample data, but as soon as they install it at a customer site, it fails. A common cause of this is that the customer has required optional fields that cannot be ignored and the API program needs to be updated. (Another common failure at customer sites is caused by not dealing with locked fiscal periods).


Ordered Header/Detail

From my previous article on the View Protocols, any optional field view is an ordered detail where its header is whatever it is an optional field for. An ordered header/detail means that you set the value of the key. But you do have to be careful to set up your optional fields correctly in the application’s setup UI, since there is validation on these.

In the sample program I’ll give a complete set of steps. In some situations some of the steps can be skipped; for instance you don’t really need to call RecordClear and RecordGenerate for the header optional fields, but I’ll leave them in since sometimes you do, and it’s easier to just use a formula that always works rather than re-thinking everything each time.

Revision Lists

Since Optional Fields are often sub-details of details that are stored in revision lists, you sometimes need to be careful that things exist before using them. Revision Lists are our mechanism to store things in memory before they are written to the database in a single database transaction. So until you save the header, nothing is in the database and everything is in memory. Revision Lists store the list of details that have been manipulated so far. The Optional Fields are themselves stored in Revision Lists until the big database transaction happens.

In the sample program we insert the detail before adding its optional fields. Until the detail is inserted there is nothing to attach the optional fields to, so we need to do that first. The Insert operation only adds the detail record to a revision list in memory, but once that is done we can add sub-details (i.e. Optional Fields) to it (which are also stored in memory).

Sample Program

For a sample program, I modified the ASP.Net MVC sample ARInvEntry to save a couple of optional fields for the header and a couple of optional fields for the detail. Below is a subset of the code to insert the detail optional fields:

// 9. Insert detail.
arInvoiceDetail.Insert();   // Insert the detail line (only in memory at this point).

// 10. Add a couple of detail optional fields.
arInvoiceDetailOptFields.RecordClear();
arInvoiceDetailOptFields.RecordGenerate(false);
arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("EXTWARRANTY", false);
arInvoiceDetailOptFields.Fields.FieldByName("VALIFBOOL").SetValue(true, false);
arInvoiceDetailOptFields.Insert();
arInvoiceDetailOptFields.RecordClear();
arInvoiceDetailOptFields.RecordGenerate(false);
arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("WARRANTYPRD", false);
arInvoiceDetailOptFields.Fields.FieldByName("VALIFTEXT").SetValue("180 Days", false);
arInvoiceDetailOptFields.Insert();

// 10.5 Register the changes for the detail.
arInvoiceDetail.Update();

// 11. Insert header. (This will do a Post of the details.)
arInvoiceHeader.Insert();

Different Field Types

If you look in the database you will see that the Optional Field tables don’t hold that many fields, yet any Optional Field View has quite a few fields; many of them are calculated. Optional Fields can be all sorts of different types, but in the database all the values are stored in a single VALUE field regardless. So there has to be a conversion to/from this text field and the real type. This is the job of all the VALIFtype fields. Basically you use the VALIF field matching the type of the Optional Field and the View will handle the conversion to and from the type as stored in the VALUE database field. That is why we used the fields VALIFTEXT and VALIFBOOL above.

Auto Insert

In the sample program I just inserted the optional fields myself. However there is another way. Views that have optional fields usually have a field called PROCESSCMD (or something similar) where you can set a value with a title like “Insert Optional Fields” which will insert the optional fields that are needed. You can set this field and call Process on the View and it will insert these optional fields for you. Then you can read the optional field, set its value and update it. Some people find this an easier way to do things and you get all the optional fields with their default values as a bonus. (Note that this only applies to Optional Fields that are set to auto insert in the applications Optional Fields setup screen).

Similarly for Views that will transfer in Optional Fields (like from an Order to a Shipment), you will see PROCESSCMD’s like “Default and Transfer Optional Fields”. If you are doing API programming and want to preserve the flow of Optional Fields, you will occasionally have to set one of these and call Process.

When dealing with Optional Fields, it’s worth checking out the PROCESSCMD functions of the main View to see if they will help you do your job and save you a fair bit of coding.
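
The pattern is just setting the field and calling Process; something like the following sketch (the field name, command value and view variable here are assumptions, since the real ones come from the particular View’s meta-data):

// Ask the View to insert the auto-insert optional fields for us.
// The command value 4 is purely illustrative; look up the real value
// for "Insert Optional Fields" in the View's field meta-data.
arInvoiceHeader.Fields.FieldByName("PROCESSCMD").SetValue(4, false);
arInvoiceHeader.Process();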

Lookup Values

Many optional fields have a list of valid values. If you don’t set it as one of these you will get an error message to this effect. When dealing with optional fields make sure you have an error handler to show the errors after an exception as explained here. If you want to get at these values they are in CSOPTFD CS0012 (a detail of CSOPTFH CS0011). You can read through these views like any other as explained here.
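
For instance, to list the allowed values for one optional field you can open CS0012 and browse it like any other View (a sketch; the VDESC field name is an assumption):

// Read the valid values defined for the EXTWARRANTY optional field.
ACCPAC.Advantage.View csOptFldValues = mDBLinkCmpRW.OpenView("CS0012");
csOptFldValues.Browse("OPTFIELD = \"EXTWARRANTY\"", true);
while (csOptFldValues.Fetch(false))
{
    Console.WriteLine(csOptFldValues.Fields.FieldByName("VALUE").Value +
        " " + csOptFldValues.Fields.FieldByName("VDESC").Value);
}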

Summary

Optional Fields are a powerful feature in Sage 300 ERP. Many customers use these in a fundamental way to support their businesses. This means that developers working in the Sage 300 world have to be cognizant of these and make sure their programs are compatible with their use.

Written by smist08

January 4, 2014 at 3:54 am

Using the Sage 300 .Net API from ASP.Net MVC


Introduction

In this blog posting we are going to switch from using WinForms projects to an ASP.Net MVC project. ASP.Net MVC is Microsoft’s newest Web Development Platform. Be careful not to confuse ASP.Net MVC with ASP.Net. Both are still available and have similar version numbers, both are still part of Visual Studio, but they are quite different. The main problem with ASP.Net is that it doesn’t handle HTML5 web pages with lots of JavaScript well. Generally if you write a lot of JavaScript, the ASP.Net framework becomes pretty much useless. Microsoft then answered these complaints with ASP.Net MVC which is a more modern framework designed to build modern interactive web sites.


I’ve now installed Visual Studio 2013, so all the projects going forwards are going to be stored in that format. As a result I created the sample application that goes with this article in Visual Studio 2013 using ASP.Net MVC version 5.

MVC versus WinForms

I’ve been using WinForms for the samples in this series of articles on the Sage 300 .Net API because WinForms makes it really easy to write simple programs. I just create a simple project, throw some controls on the form, double click on a control and write some code to do something. Very simple and easy. But the end result is just a Windows application that needs to be installed on a given computer along with the .Net framework. ASP.Net MVC is for writing Web applications and it is much more complicated to get started. But the end result is a scalable web application that can run on a farm of web servers, with the UI running in any browser on the client’s computer with nothing installed except the browser. I can produce UIs just as rich as with WinForms, but now they need to be specified in HTML5/CSS, and they gain a number of additional capabilities as a result. With WinForms I can create global variables (or leave objects instantiated for the duration of the program), keep state easily and pass data around freely. With ASP.Net MVC each operation causes new objects to be created and nothing is remembered from call to call (unless I do something special); this way a farm of application servers could handle the requests, each working independently. With WinForms there was no particular structure to the programs I produced; in ASP.Net MVC we are dealing with a very definite MVC structure.

MVC

Model – View – Controller (MVC) is a design pattern for creating more robust and more maintainable user interfaces programs. It separates the concerns into three parts to produce a more modular design.

[Diagram: Model-View-Controller]

I’m not going to go too much into the details of creating a program and setting these up, since there are dozens of really good tutorials on the Internet like this one. Rather let’s look at our example and talk about the various elements in a real setting.

Sample Program

For a sample program I ported the first WinForms application to ASP.Net MVC. This one is the ARInvEntry project. It basically puts up a web form where you enter the customer number and then hit the “Create Invoice” button and it creates an A/R Invoice for that customer. The main goal here is to get a starting project with a Razor View, a controller and then a Model that calls into the Sage 300 Business Logic using our .Net API.

Note that since Sage 300 ERP is a 32 Bit application, you must run this project as a 32 Bit application, which means that if you run it through IIS, the application pool it uses (probably the default application pool) must have 32 Bit mode enabled.

In this program the main web page is a Razor view and it is connected to the model, which allows the framework to provide data binding of the customer number into the model, so we don’t need to write any code to move the data from the browser into the model’s member variable. Inside Index.cshtml there is the statement:

@model ARInvEntry.Models.CreateInvoice

Which connects the View to the Model. Then the form is defined by:

@using (Html.BeginForm("CreateInvoice", "Home")) {
    <p>
        Customer Number:
        @Html.TextBoxFor(model => model.CustomerNumber)
    </p>
    <p>
        <input type="Submit" value="Create Invoice" />
    </p>
}

Notice that this code is a combination of HTML (like <p>) and template helper functions (like @Html.TextBoxFor, whose argument binds it to the model’s member variable). The Html.BeginForm command connects the “Create Invoice” button to the controller’s action method. “Home” is the name of the controller (which was generated by Visual Studio) and its code is in HomeController.cs. Then the first argument, “CreateInvoice”, is the method to call when the form is submitted (i.e. the button is pressed). All the @ tag template functions will be converted to pure HTML by the Razor view engine when the page needs to be generated.

It may not look like there is a lot of glue holding things together here; that is because ASP.Net does a lot of things by convention, so if you don’t specify something explicitly, it has a convention to find it. Generally if things have the same (or similar) names then they go together. This makes coding faster, since you just follow the coding conventions and then all the attendant glue is provided automatically and you don’t need to worry about it. This convention-over-configuration approach takes a bit of getting used to, but once you are used to it, development goes quite a bit quicker.

In the controller (HomeController.cs) there is the action method which gets called when the user hits submit:

        public ActionResult CreateInvoice(Models.CreateInvoice crInvObj)
        {
            crInvObj.DoCreateInvoice();
            return RedirectToAction("Index");          
        }

Basically this is called with the model as a parameter where the data bound members will already be filled in. So all it has to do is call the method in the model to create the invoice.

It then returns, redirecting us back to the main page.

The model has a public member CustomerNumber:

public string CustomerNumber { get; set; }

which is used by the data binding support and populated automatically from the browser. Then there is the DoCreateInvoice method that contains the code from the previous project to create the invoice.

Limitations and Future Work

At this point, this is a very basic MVC project which just really establishes our framework for future work. There is no error reporting (except for writing to the VS output window). After you hit submit, we do the work and then return a complete page. We aren’t doing the searching we supported in the WinForms application.

For the error reporting we can’t just pop up a message box from the Model when an error occurs. Generally all UI communication needs to flow from the model back to a View. We could add JavaScript to the View to get the response from the model and display a message box (JavaScript alert). We could add a new Razor view for displaying error and status messages. We could provide a message area on the main form to fill in. We’ll look at implementing some of these in future articles.

Right now when the invoice is created, we simply refresh the home screen. This isn’t particularly efficient or elegant. Partly this is because the submit button is implemented as a form submit. What we would really like to do is to submit an Ajax web service request when the button is pressed and then just update select parts of the screen based on the reply.

If you choose to deploy this project to Azure as a PaaS application, you will run into problems due to the dependency on the Sage 300 .Net API, which requires at least Workstation Setup to be installed, so we will need to talk about how to deploy a project of this nature to the cloud.

Summary

This article presents a very simple ASP.Net MVC application whose Model communicates with Sage 300 via our .Net API. Now that we have this starting point, we can start developing some new elements of the project and explore various aspects of writing programs with ASP.Net MVC.

Written by smist08

November 23, 2013 at 9:40 pm

Using Browse Filters in the Sage 300 ERP .Net API


Introduction

So far, as we’ve explored the Sage 300 ERP .Net API, we’ve either read rather specific records via their key segments or browsed through all the records in a table. But more generally, applications are interested in processing subsets of an entire table in an efficient manner, for instance all the invoices for a given customer or all the G/L Accounts for a specific region. In this article we are going to look at some of the mechanisms that we provide to do this.

Generally you browse through a set of records by calling the View’s Browse method to set a filter on the records and then you call the various methods like Fetch, GoTop, GoNext, etc. to iterate through the records. There are also the FilterSelect and FilterFetch routines to iterate through a selection of records. First we’ll talk about the actual filters, then we’ll return to the methods in the API that use them.

Browse Filters

Filters are strings that are passed as parameters to various methods in the API to restrict Views according to various criteria. This would be used when a user enters selection criteria to locate data.

The format of the string expression is:

expression ::= [(…] condition [)…] [Boolean-operator [(…] condition [)…]…]

where:

  • condition ::= field-name relational-operator operand
  • Boolean-operator ::= AND | OR
  • operand ::= <field-name | constant>
  • relational-operator ::= > | < | = | <= | >= | != | LIKE

An example of an expression is:

LASTNAME = "SMITH" AND AGE < 30 OR AGE > 40

where LASTNAME and AGE are field names within the View.

Brackets are allowed. Expressions are evaluated strictly from left to right unless brackets are put in. Thus, in the previous example, the expression is evaluated as:

(LASTNAME = "SMITH" AND AGE < 30) OR AGE > 40

This is true for SMITHs under 30 years of age, and for any person over the age of 40. Put in brackets if you want the other order. For example:

LASTNAME = "SMITH" AND (AGE < 30 OR AGE > 40)

returns only SMITHs, under 30 years of age or over 40.

All the relational operators work with all the field types in the expected way, except for Boolean fields, where only the = and != operators apply.

Note that both operands can be fields. If they are both fields, they must be the same type. When the expression is parsed, if the second operand is a constant, it is converted to the type of the first operand (which is always a field).

The filter mechanism handles all the field types, but since filters are strings, you have to know how to represent the various values as strings. Boolean fields are always TRUE or FALSE. Date fields are always formatted as YYYYMMDD. Time fields are formatted as HHMMSSHH (the last HH is hundredths of a second).
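
For example, a filter for all invoices dated in 2014 would write the dates in YYYYMMDD form (a sketch: DATEINVC is just an illustrative field name):

// Dates in browse filters are always formatted as YYYYMMDD.
arInvH.Browse("DATEINVC >= 20140101 AND DATEINVC <= 20141231", true);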

The use of white space around operators, field names and constants is necessary. If a constant is a string and contains white space then it must be enclosed in quotes. The quote character (") can be embedded by using \" (no other replacements are performed).

The LIKE operator is like the = operator, except the second operand can contain the wild cards % and _, where:

  • % matches any group of characters.
  • _ matches any single character.

For example, LASTNAME LIKE "SM%" is true for any LASTNAME that begins with SM. LIKE can only be used with string field types.

Optimization

The internal mechanisms of our API will optimize the processing of filter strings, either converting them to SQL where clauses when processing requests via a SQL database, or choosing the best key segment and traversing it when record based processing is needed. Note that you should still pay attention to indexes when using SQL; SQL databases aren’t magical and do require good indexes when filtering on large tables, or the query could take quite a long time.

Using Filters

Sage 300 version 1.0A only ran on Btrieve as a database and the only methods that used filters were Browse and Fetch. You can still use these today and they are very useful methods. As the API evolved and we added more and more SQL support, we added to these basic methods to help with performance and to make the API easier to use. The Fetch method behaves very much like the Btrieve API: it will sometimes do a get-greater-than-or-equal-to and sometimes do a get-next. It will do the get GE (or get LE when going backwards) after a Browse call or after one of the key segments has its value set. For UI programmers this was annoying, since they change direction frequently and had to handle the case where Fetch returns the current record. To simplify this we added GoTop, GoNext, GoPrev and GoBottom. GoTop will get the first record, GoBottom the last, and GoNext will always get the next record, never the current record; similarly for GoPrev. Generally I find these new methods far easier to use, and tend not to use Fetch anymore. Note that these Go methods are part of the COM and .Net APIs and are not part of the regular View API, so they may not be available in other API sets; internally the .Net API translates them into the correct Fetch calls to do the job.

To aid in SQL performance we added FilterSelect which will actually issue a SQL query to retrieve a results set with the indicated records. Then FilterFetch browses through these records. For Pervasive.SQL these methods work under the covers just like Browse/Fetch. But note that you can’t update records that you retrieve this way. If you want to update such a record you need to call Read first. This is due to multi-user issues as well as the semantics of how SQL result sets work.

There is also FilterCount, which returns a count of the records that match a given filter, and FilterDelete, which deletes the records that match the filter. Beware that these two functions translate into SQL statements for SQL databases and will execute as fast as the database server can process them (which might not be fast if there aren’t indexes to exploit), while Pervasive will iterate through the records to perform the operation. Typically you call FilterDelete only on header or flat type Views, and leave it to the header to call FilterDelete on the details for you.

Sample Program

In the sample program we add a search box where you can type a string, and then the program fills the list box with any records where either the customer number or the customer name contains the string you typed. The sample does the search first using Browse/GoTop/GoNext and then does the same loop using FilterSelect/FilterFetch. Note that this search isn’t scalable, since the database will need to search the entire customer table looking for these substrings. But often SQL Server can do this sort of search reasonably quickly as long as the data isn’t too big. Note that adding an index won’t help, because we are looking for substrings. The more scalable way to do this would be via full text indexing or a Google-like search tool.

bool gotOne;
string searchFilter =
    "IDCUST LIKE \"%" + SearchBox.Text + "%\" OR NAMECUST LIKE \"%"
    + SearchBox.Text + "%\"";

// Fill up the list box using Browse/GoTop/GoNext.
ResultsList.Items.Clear();
arCus.Browse(searchFilter, true);

gotOne = arCus.GoTop();
while (gotOne)
{
    ResultsList.Items.Add(
        arCus.Fields.FieldByName("IDCUST").Value + " " +
        arCus.Fields.FieldByName("NAMECUST").Value);
    gotOne = arCus.GoNext();
}

// Go through the records using FilterSelect/FilterFetch, but
// just print them to the console.
arCus.FilterSelect(searchFilter, true, 0, ViewFilterOrigin.FromStart);
while (arCus.FilterFetch(false))
{
    Console.WriteLine(arCus.Fields.FieldByName("IDCUST").Value +
        " " + arCus.Fields.FieldByName("NAMECUST").Value);
}

Summary

This was a quick introduction to Browse Filters in the Sage 300 ERP API. These form the basis for selecting sets of records and we will be using these extensively in future articles.


Written by smist08

November 9, 2013 at 1:10 am