Stephen Smith's Blog

Musings on Machine Learning…


Accessing Sage 300c’s Business Logic from the Web UIs


Introduction

In the Sage 300 VB UIs, a user would do something in the UI (press a button or tab out of a field), the VB UI would be notified of this, and it would then possibly execute a number of Sage 300 Business Logic (View) calls, update various other fields based on their results, and possibly provide user feedback via a message box.

In the Web UIs we want to do similar processing, since we want to re-use the tried and true Sage 300 Business Logic, but we have to be careful because the Web UI now runs half as JavaScript in the Browser and half as .Net assemblies on the server. The communication between the Browser and the server needs particular care, since each call over the Internet incurs quite a bit of latency. Generally, we never want one user action to result in more than one call to the server (and ideally most user actions shouldn’t result in any calls to the server).

This blog post talks about where you put your code to access the Sage 300 Business Logic and how a UI interaction in the Browser flows through the system to execute this business logic.

Architecture

In the new Web UI architecture, we access the Sage 300 Business Logic from our Business Repository classes. The base classes for these provide a wrapper of the Sage 300 .Net API to actually access the Views, while hiding the details of things like session and database link management. Above this layer are the usual ASP.Net MVC Models and Controllers.

[Diagram: The Sage 300c Web UI Architecture]

Generally, we want to put all this logic in the Business Repository so it can be used by multiple higher level clients including the Web UIs, our new RESTful WebAPI and services which are available for other applications to utilize.

Some of the layering is in place, ready for additional functionality like customization. We need to provide the common interfaces that can act as the basis for programmatic customization, where custom modules are inserted into the processing flow via Unity Interception.
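To make the idea of inserting a custom module concrete, here is a rough, hypothetical sketch of wiring an interception behavior into a Unity container. The repository interface, its implementation and the AuditBehavior class are all invented for illustration; this is not the actual Sage 300 registration code.

using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

// Placeholder interface/implementation standing in for a real business repository.
public interface IInvoiceRepository { decimal GetCreditLimit(string customerNumber); }
public class InvoiceRepository : IInvoiceRepository
{
    public decimal GetCreditLimit(string customerNumber) { return 0m; }
}

// A custom behavior that runs around every call on the intercepted interface.
public class AuditBehavior : IInterceptionBehavior
{
    public IEnumerable<Type> GetRequiredInterfaces() { return Type.EmptyTypes; }
    public bool WillExecute { get { return true; } }

    public IMethodReturn Invoke(IMethodInvocation input,
        GetNextInterceptionBehaviorDelegate getNext)
    {
        Console.WriteLine("Entering " + input.MethodBase.Name);
        IMethodReturn result = getNext()(input, getNext);   // call the real repository
        Console.WriteLine("Leaving " + input.MethodBase.Name);
        return result;
    }
}

public static class CustomizationBootstrap
{
    public static IUnityContainer Register()
    {
        var container = new UnityContainer();
        container.AddNewExtension<Interception>();

        // Resolving IInvoiceRepository now returns a proxy that runs AuditBehavior
        // around the real InvoiceRepository methods.
        container.RegisterType<IInvoiceRepository, InvoiceRepository>(
            new Interceptor<InterfaceInterceptor>(),
            new InterceptionBehavior<AuditBehavior>());
        return container;
    }
}

A customization could register any number of such behaviors (validation, logging, substituting a different implementation) without the calling code changing, which is the point of exposing the repositories through interfaces.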

Moving VB Code

In VB we often make lots of Business Logic (View) calls interspersed with lots of interactions with various UI controls. This code has to be separated: the Business Logic (View) calls go in the Business Repository, which runs on the server, while the part that interacts with the controls moves to the JavaScript code running in the Browser. The Business Repository has to provide the necessary data in a single payload which the model/controller will transport to the Browser for processing.

The easiest way for the repository to transfer data is to have the model provide extra fields for this communication. This way no extra layers need to be involved: the business repository just populates these fields and the JavaScript layer pulls them out of the returned JSON object and uses them.
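As a rough illustration of the pattern (the class and property names here are invented, not the actual Sage 300 model), the repository simply hangs extra UI-oriented properties off the view model and fills them in on the server:

// Hypothetical sketch of a model carrying extra, UI-oriented fields.
public class InvoiceHeaderModel
{
    // Regular business fields backed by the View.
    public string CustomerNumber { get; set; }
    public decimal InvoiceTotal { get; set; }

    // Extra fields populated by the business repository purely for the UI's benefit,
    // e.g. a flag the JavaScript checks to decide whether to pop up the credit check.
    public bool ShowCreditCheck { get; set; }
    public string CustomerCreditMessage { get; set; }
}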

But you only want to add so much to the model, since you don’t want it to be too cumbersome to move around and you might want more focused calls. For these we usually define special calls in the controller and these go through a services layer to execute the code in the repository. The service call only passes the exact data needed (like parameters to a function) and knows what data to expect back.

Example

Adding extra fields to the model is fairly straightforward, so let’s trace through the logic of making a services call. In this example we’ll look at the simple case of checking a customer’s credit limit in A/R Invoice Entry (which uses a stateful business repository). We’ll start in the JavaScript code and work our way down through the layers to get an idea of who does what.

So let’s start near the top. In the A/R Invoice Entry UI there are various times when the credit limit needs to be looked up. So the JavaScript code in the InvoiceEntryBehaviour.js file has a routine to initiate this process. Note that server calls are asynchronous so the response is handled in a callback function.

    showCreditLimit: function (result) {
        // Open Credit Check pop up window
        if (result) {
            var jsonResult = JSON.parse(result);
            if (jsonResult.ShowCreditCheck) {
                arInvoiceEntryRepository.getCreditCheck(jsonResult.id,
                    sg.utls.kndoUI.getFormattedDate(jsonResult.docDate),
                    sg.utls.kndoUI.getFormattedDate(jsonResult.dueDate),
                    "n" + invoiceEntryUI.CurrencyDecimals, jsonResult.totalPaymentAmountScheduled,
                    jsonResult.prepaymentAmount);
            } else {
                onSuccess.onCreditClose();
            }
        }
        invoiceEntryUI.ModelData.isModelDirty.reset();
    },

This calls a function in the InvoiceEntryRepository.js file to actually make the call to the server:

    getCreditCheck: function (customerNumber, documentDate, dueDate, decimals, invoiceAmount,
        prepaymentAmount) {
        var data = {
            id: customerNumber,
            docDate: documentDate,
            dueDate: dueDate,
            decimals: decimals,
            totalPaymentAmountScheduled: invoiceAmount,
            prepaymentAmount: prepaymentAmount
        };
        sg.utls.ajaxPostHtml(sg.utls.url.buildUrl("AR", "InvoiceEntry", "GetCreditLimit"), data,
              onSuccess.loadCreditLimit);
    },

This will initiate the call to the server. The URL will be built something like servername/Sage300/AR/InvoiceEntry/GetCreditLimit. The ASP.Net MVC infrastructure uses convention over configuration to look for a matching entry point in a loaded controller, and hence calls the GetCreditLimit method in the InvoiceEntryController.cs file:

        [HttpPost]
        public virtual ActionResult GetCreditLimit(string id, string docDate, string dueDate,
               string decimals, decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
        {
            try
            {
                return PartialView(AccountReceivable.ARInvoiceCreditCheck,
                      ControllerInternal.GetCreditLimit(id, docDate, dueDate, decimals,
                      totalPaymentAmountScheduled, prepaymentAmount));
            }
            catch (BusinessException businessException)
            {
                return JsonNet(BuildErrorModelBase(CommonResx.NotFoundMessage, businessException,
                    InvoiceEntryResx.Entity));
            }
        }

Which will call the InvoiceControllerInternal.cs GetCreditLimit method:

        internal ViewModelBase<CustomerBalance> GetCreditLimit(string customerNumber,
            string documentDate, string dueDate, string decimals,
            decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
        {
            var creditBalance = Service.GetCreditLimit(customerNumber, totalPaymentAmountScheduled,
                 prepaymentAmount);

            if (creditBalance.CalcCustomerOverdue == CalcCustomerOverdue.Yes &&
                creditBalance.CustomerBalanceOverdue > creditBalance.CustomerAmountOverdue)
            {
                creditBalance.CustomerCreditMessage = string.Format(
                        InvoiceEntryResx.CustCreditDaysOverdue,
                        creditBalance.CustomerDaysOverdue,
                        creditBalance.CustomerBalanceOverdue.ToString(decimals),
                        creditBalance.CustomerAmountOverdue.ToString(decimals));
            }

            if (creditBalance.CalcNatAcctOverdue == CalcNatAcctOverdue.Yes &&
                creditBalance.NatAcctBalanceOverdue > creditBalance.NatAcctAmountOverdue)
            {
                creditBalance.NationalCreditMessage = string.Format(
                        InvoiceEntryResx.NatCreditDaysOverdue,
                        creditBalance.NatAcctDaysOverdue,
                        creditBalance.NatAcctBalanceOverdue.ToString(decimals),
                        creditBalance.NatAcctAmountOverdue.ToString(decimals));
            }

            creditBalance.CustomerCreditLimit =
                 Convert.ToDecimal(creditBalance.CustomerCreditLimit.ToString(decimals));
            creditBalance.CustomerBalanceVal =
                 Convert.ToDecimal(creditBalance.CustomerBalanceVal.ToString(decimals));
            creditBalance.PendingARAmount =
                 Convert.ToDecimal(creditBalance.PendingARAmount.ToString(decimals));
            creditBalance.PendingOEAmount =
                 Convert.ToDecimal(creditBalance.PendingOEAmount.ToString(decimals));
            creditBalance.PendingOtherAmount =
                 Convert.ToDecimal(creditBalance.PendingOtherAmount.ToString(decimals));
            creditBalance.CurrentARInvoiceAmount =
                 Convert.ToDecimal(creditBalance.CurrentARInvoiceAmount.ToString(decimals));
            creditBalance.CurrentARPrepaymentAmount =
                 Convert.ToDecimal(creditBalance.CurrentARPrepaymentAmount.ToString(decimals));
            creditBalance.CustomerOutstanding =
                 Convert.ToDecimal(creditBalance.CustomerOutstanding.ToString(decimals));
            creditBalance.CustomerLimitExceeded =
                 Convert.ToDecimal(creditBalance.CustomerLimitExceeded.ToString(decimals));
            creditBalance.NatAcctCreditLimit =
                 Convert.ToDecimal(creditBalance.NatAcctCreditLimit.ToString(decimals));
            creditBalance.NationalAccountBalance =
                 Convert.ToDecimal(creditBalance.NationalAccountBalance.ToString(decimals));
            creditBalance.NatAcctOutstanding =
                 Convert.ToDecimal(creditBalance.NatAcctOutstanding.ToString(decimals));
            creditBalance.NatAcctLimitLeft =
                 Convert.ToDecimal(creditBalance.NatAcctLimitLeft.ToString(decimals));
            creditBalance.NatAcctLimitExceeded =
                 Convert.ToDecimal(creditBalance.NatAcctLimitExceeded.ToString(decimals));

            return new ViewModelBase<CustomerBalance> { Data = creditBalance };
        }

This routine first calls the GetCreditLimit service in InvoiceEntryEntityService.cs:

        public virtual CustomerBalance GetCreditLimit(string customerNumber,
            decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
        {
            var repository = Resolve<IInvoiceEntryEntity<TBatch, THeader,
                 TDetail, TPayment, TDetailOptional>>();
            return repository.GetCreditLimit(customerNumber,
                 totalPaymentAmountScheduled, prepaymentAmount);
        }

This in turn calls the repository’s GetCreditLimit routine in InvoiceEntryRepository.cs. This routine does regular View processing using the base repository wrapper routines, which insulate us from the session/dblink handling logic and also do some basic error processing:

        public virtual CustomerBalance GetCreditLimit(string customerNum,
            decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
        {
            _header.Read(false);
            _creditCheck.SetValue(CustomerBalance.Fields.CustomerNumber, customerNum);
            _creditCheck.SetValue(CustomerBalance.Fields.CurrentARInvoiceAmount,
                totalPaymentAmountScheduled);
            _creditCheck.SetValue(CustomerBalance.Fields.CurrentARPrepaymentAmount,
                prepaymentAmount);
            _creditCheck.Process();
            return _creditCheckMapper.Map(_creditCheck);
        }

Finally, down in the business repository, the code should look fairly familiar to anyone who has done any C# coding using our Sage 300 .Net API. Further, this code should also appear somewhere in the matching VB code, and besides being translated to the .Net API, it’s become quite separated from the UI control code (in this case the JavaScript).

At the end of this, all the calls return, propagating the returned data back to the Browser in answer to the AJAX call that it made.

It might look like a lot of code here, but remember that the business repository and JavaScript bits have corresponding VB code. The other layers are there to make all the code more re-usable, so it can be used in contexts like the WebAPIs, and to provide the interfaces that act as the hooks needed for customization.

Summary

This article is intended to give you an idea of where to put your code that accesses the Sage 300 Business Logic and then how to call that from the Web UIs. There are a lot of layers but individually most of the layers are fairly simple and most of the code will appear in the Business Repository and the JavaScript behavior code.


Written by smist08

February 12, 2016 at 3:27 am

Starting to Program the Sage 300 ERP Views in .Net


Introduction

Last time we used the Sage 300 ERP .Net Interface to open a session and create a database link to a Sage 300 ERP company. In this article we will start to investigate how to use the API to manipulate the Sage 300 ERP business logic. The individual business logic objects are known as Views (not to be confused with the Views in MVC or SQL Server Views). For a bit more background on the Views have a look at this article.

These business logic Views represent all the various objects in Sage 300 like G/L Accounts, A/R Customers or O/E Orders. There are also Views for doing processing operations like posting G/L batches or running I/C Day End. The nice thing about these Views is that they all share the same interface, and this complete interface is accessible from the .Net API. Although the API to each View is standard, sometimes you need to use several Views together to accomplish a task, and there are about five protocols for how the Views co-operate. But if you learn the API for one set of Views and learn the five protocols, then you can do anything in any of the Sage 300 applications from any of the several hundred Views. Additionally you can utilize any Views created by third party ISV solutions.

Since the .Net interface is used by our VB UIs when they run web-deployed in the old 5.0A style via the .Net Remoting option, you know that the .Net API is capable of performing any task that can be performed by a regular Sage 300 form.

As we proceed we’ll look into the various parts of the API in more detail, but for this article we’ll just look at how to get started and do some basic operations with data views.
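As a quick recap of the previous article, the session and database link assumed by the examples below might be set up something like the following sketch (the application ID, version string, credentials and company are placeholder values; see the previous post for the details):

using System;
using ACCPAC.Advantage;

class Program
{
    static void Main()
    {
        // Create and initialize the API session, then sign on to a company.
        Session session = new Session();
        session.Init("", "XY", "XY1000", "61A");
        session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0, "");

        // Open a read/write database link to the company; the examples below
        // refer to this object as mDBLinkCmpRW.
        DBLink mDBLinkCmpRW = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
    }
}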

Opening a View

To use a View, first we need to open it from the database link (DBLink). Doing this is quite simple:

ACCPAC.Advantage.View arCustView = mDBLinkCmpRW.OpenView("AR0024");

In this case we needed to add the “ACCPAC.Advantage” part to the definition of View, because there is a System.Windows.Forms.View and the compiler needs to know which one we mean. Unfortunately the word View is a bit overused in Computer Science, which can lead to some confusion.

But what is this “AR0024” that we are opening? Where did that come from? In the Sage 300 world, all UIs and Views are uniquely identified by what is called a Roto ID, which consists of two alphabetic characters followed by four decimal digits. Every Sage 300 SDK application, whether written by Sage or an ISV, must register a unique two-letter prefix for their application with the DPP program. This guarantees that two SDK modules won’t conflict with each other. Then the developer of the module (in this case A/R) assigns the numbers to all their Views and UIs. Sage’s convention is to start the Views at 0001 and to start the UIs at 1000.

So how do you know what to specify? There are several ways to figure this out.

  1. Use the Sage 300 ERP Application Object Model (AOM), which is on our Web site here. From this site you can get a list of all Views for all the Sage applications along with any underlying database table structure. Using this site requires Internet Explorer. You can’t use this for information on ISV applications.
  2. If you have the SDK then you can use the very helpful ViewDoc utility which is part of the SDK application (which you must activate inside Sage 300). A benefit of this is that you can get information on ISV applications that are installed on your system as well.
  3. Use macro recording. If you macro record a UI which uses the View you are after, then the macro recording will record the DBLink OpenView call with the Roto ID. Just note you need to change the syntax from VBA/COM to C#/.Net (which is fairly easy).
  4. The UI Info tool that is included with the core product can be used, but you need to first get the info on a UI that uses the View then drill down into the View by getting info on the data source.

After calling OpenView, your view object is ready to use, so let’s see some things we can do.

CRUD

CRUD stands for “Create, Read, Update and Delete”. Here we’ll look at reading and updating.

When you open a View, no data is loaded. If we don’t know what record we want, one way to find out is to iterate through all the records, or to just read in the first one. Calling GoTop will get the first record.

bool gotOne = arCustView.GoTop();

This function returns a bool: true if it returned a record and false if it didn’t. Most of the .Net API functions have simple return codes like this. These are usually the things you want to handle easily programmatically. If something else happens then the function will throw an exception; these could be things like a network connectivity error or a SQL Server index corruption error. Today we’ll just handle the easy cases. In a future article we’ll look more at error handling and what to do when one of these methods throws an exception.
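Until then, a minimal defensive pattern is simply to wrap the View calls in a try/catch. This sketch assumes the arCustView object opened above and just reports whatever exception the API throws:

// Minimal sketch: handle the simple bool return and report anything unexpected.
try
{
    bool gotOne = arCustView.GoTop();
    if (!gotOne)
    {
        Console.WriteLine("No customer records found.");
    }
}
catch (Exception e)
{
    // Network connectivity problems, database corruption, etc. surface here as exceptions.
    Console.WriteLine("Unexpected error calling the View: " + e.Message);
}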

Now let’s iterate through all the records and print out the customer records (assuming the GoTop above was called first).

String custNum;
String custName;

while (gotOne)
{
    custNum = (String) arCustView.Fields.FieldByName("IDCUST").Value;
    custName = (String) arCustView.Fields.FieldByName("NAMECUST").Value;
    Console.WriteLine("Customer Number: " + custNum +
        " Customer Name: " + custName);
    gotOne = arCustView.GoNext();
}

If we got a record, then we get the customer number and customer name and write them to the console. Inside each View there is a collection of Fields. These usually include the database table fields along with some calculated fields. We’ll look at these in much more detail in a future article. For now, this is how you get the various fields from the customer record. How do you know the strings “IDCUST” and “NAMECUST”? You find these the same way you find the Roto ID: the four methods mentioned above will also give you all the fields for each View. We had to cast the result to “String” because the field value is an object. The reason for this is that each field has a type like number, string, date or time, and that type determines the runtime type of the value object. In this case we know these are both strings, so we can just tell the compiler that with the cast. If we got this wrong we’ll get an exception when we run. Again, the four methods above will give you all the field types as well as some more useful information.
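If you aren’t sure of a field’s type, a slightly safer variant is to take the value as an object and check it before using it; a small sketch, again using the arCustView from above:

// Field values come back as object; check the runtime type before using the value.
object rawValue = arCustView.Fields.FieldByName("NAMECUST").Value;
String custNameSafe = rawValue as String;
if (custNameSafe != null)
{
    Console.WriteLine("Customer Name: " + custNameSafe);
}
else
{
    Console.WriteLine("NAMECUST did not come back as a string; actual type: " +
        rawValue.GetType().Name);
}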

OK so that reads all the records, but what if we know which record we want and just want to read it? This is done as follows:

arCustView.Fields.FieldByName("IDCUST").SetValue("1200", false);
arCustView.Read(false);
custNum = (String) arCustView.Fields.FieldByName("IDCUST").Value;
custName = (String) arCustView.Fields.FieldByName("NAMECUST").Value;
Console.WriteLine("After Read, Customer Number = " + custNum +
     " Customer Name: " + custName);

Here we see how to set the key field for the customer record with the SetValue method of the field. The second parameter is verify, which we’ll talk about another time; for now it is fine left set to false. It just determines whether we should verify this field right away or wait until later.

Then we call Read to read the record. The parameter lock is set to false, which is nearly always the case (if you set it to true then you will get an exception and an error about needing to be in a transaction which we’ll talk about another time).

Then there is the code to get the field values and print them to the console. A bit of bad programming here with no error checking: note that this will only work if there is a customer 1200, as there is in sample data.

Suppose now that we’ve read this record we want to update it? Well that turns out to be quite easy:

arCustView.Fields.FieldByName("NAMECUST").SetValue("Ronald MacDonald", false);
arCustView.Update();

Here we set the field “NAMECUST” to a new value and then call the Update method, which has no parameters. You can then run Sage 300, open the A/R Customers screen, bring up customer 1200 and see that his name is in fact changed.

Summary

This was a quick introduction to the basics of how to access and use the Business Logic Views in Sage 300 ERP. All the API elements described apply to all the Views in Sage 300 ERP, so learning to manipulate one object goes a long way towards proficiently manipulating all objects.

I’ve updated the sample application here as mentioned in this article.

Written by smist08

October 20, 2013 at 12:27 am

Sage 300 ERP Macros


Introduction

We’ve had macros in Sage 300 ERP since version 1.0A. In the early days we used CABLE (the CA Basic Language Engine) as our macro language. This was a macro language version of CA-Realizer which we used for UI development back then. It was fun creating the development environment with debugging capabilities and such. Amazingly CABLE macros are still supported in Sage 300 ERP and if you run a CABLE macro (*.mac) you will get this environment:

[Screenshot: the CABLE macro development environment]

With version 4.0A we introduced Visual Basic for Applications (VBA) as our macro language. We did this hand in hand with introducing our first COM interface a4wcom. This interface is still around, but generally we use the newer Sage 300 ERP COM interface a4wcomex.


Why Macros?

We provide macros as a method of customizing the product that doesn’t require the SDK. While it does require programming, the Basic language used is simpler than, say, C, Java, C# or C++, so hopefully more people can provide meaningful coded customizations that are largely upgrade-safe. Generally VBA is a very powerful development environment and we’ve seen some amazing pieces of work implemented as macros. Plus VBA is the macro language used by Microsoft Office, so there are many technical resources, books, courses and such to help you with your development. Further, you can use macro recording to help you with some starting code.

For many of our customers, the ERP package handles their financial accounting needs; due to regulations around Generally Accepted Accounting Principles (GAAP) these are pretty standard. However, especially in the operations modules, a lot of businesses want custom calculations and procedures to more exactly match their particular business, whether that means enforcing additional government regulations, implementing custom pricing models or something else. These needs are very varied, and we need to provide a powerful framework so that they can be accommodated, whatever they may be.

At the same time we need these customizations to easily migrate from version to version so that customizations don’t then lock a customer into a particular version and prevent them from ever upgrading.

A powerful macro language with deep hooks into the product is an ideal way to accomplish these goals.

Business Logic

Both CABLE and VBA macros are fundamentally based on our Sage 300 ERP Business Logic Objects or Views. The API for all our Business Logic Objects is the same, so once you learn one, to some degree you learn them all. For a bit more info on our Business Logic, have a look at this blog posting. For an example of creating Orders have a look at this posting.

This can be a great mechanism for say importing data from an external system. VBA can access the API of the external system, extract the data and then feed it into our Business Logic to do things like import G/L Journal Entries or O/E Orders.

Using the VBA Forms capability you can create your own screens that interact with our business logic and perform your custom tasks. The VBA forms library/system is a very powerful but easy to use system for creating potentially quite sophisticated UIs.

The API to our business logic that the macros use is the same as the API used by our UIs, so you know that anything you can do in a UI, you can also do in a macro. It also guarantees that this layer is heavily tested and supported.

Generally the main interfaces to our business logic stays the same. As we add features we add fields, but as long as these aren’t required fields and you don’t need to use these features then your macro can remain the same from version to version.

UIs

With version 5.0A, we gave VBA the ability to customize our product’s User Interface Forms. We accomplished this by re-writing all our UIs from CA-Realizer to VB. The new VB UIs were created as ActiveX controls themselves and hence could be hosted on standard VBA forms. Then each UI contained a uniform set of methods, properties and events to allow VBA macros to interact with and customize them.

There is a little work to upgrade when you go from version to version. With each version the screen control gets a new class id, so you need to remove the old version’s control reference and add the new one. Otherwise the code should remain compatible and continue to work. I documented how to do this in this blog posting.

Automating Processes

Another great use of macros is to automate recurring tasks. Besides business logic, we give you full access to printing reports, including setting all the report parameters. These can be either Crystal Reports or Financial Reports. I blogged on printing through macros here and on customizing reports here. Plus, from the Business Logic you have access to all processing functions like posting batches or running Day End.

So you can write a macro to print out all your month end reports. You can write a macro to go through and process un-posted batches. Or whatever other recurring process you want automated.

Summary

Customization through macros is a powerful technology to personalize your ERP and to allow you to achieve greater efficiency. VBA is an industry standard macros language and gives you great power to customize Sage 300 ERP.

Opening Sage 300 ERP Sessions


Introduction

Sage 300 ERP has a number of very flexible external APIs that allow programs to access all the business logic in the program. The business logic is stored in Views that are accessed via a standard API. To start using the business logic from one of our external APIs, you first need to sign on to the API and establish a session. This article is only going to talk about the AccpacCOMAPI, which is our main COM API. Sage 300 ERP has an older COM API, usually referred to as a4wcom, so be sure to use the newer one we are talking about here. Many of the concepts can be adapted to other APIs like the .Net or Java APIs. However, to interact with other COM components like the Session Manager you must be using the AccpacCOMAPI. The examples in this posting will all be in Visual Basic 6.

This API has been around for a long time, but we recently received quite a few queries through customer support about establishing connections. So I thought it might be worthwhile writing a blog post on some of the use cases we try to support, some of the functionality that perhaps isn’t very widely known, as well as the reasons why some aspects work the way they do.

For a bit more background on the Sage 300 business logic have a look at this blog posting.

Libraries

Sage 300 ERP’s COM API can be used by any tool that understands COM and how to talk to COM objects. The first step is to add the COM object to your project. In VB6 you do this by going to Project – References and adding “ACCPAC COM API Object 1.0”. In some tools you can browse to the DLL and add that; in this case you browse to wherever you installed Sage 300 ERP and then select runtime\a4wcomex.dll.

Creating and Initializing

Once you have the library available, you need to get started by creating and initializing a session object. This is the root object from which everything else in the COM API is created and derived. In VB there are a couple of ways to create the initial session object, either:

Dim mSession As New AccpacCOMAPI.AccpacSession

Or

Dim mSession As AccpacCOMAPI.AccpacSession
Set mSession = CreateObject("Accpac.Session")

Once you have a session object then you need to initialize it:

mSession.Init "", "XY", "XY1000", "61A"

If you are accessing the COM API from an external program and not an SDK application, then the parameters don’t matter. The first parameter is for when an SDK application is run from the desktop, to connect it up properly, and the other parameters are similarly for SDK applications, for things like finding your application’s help files correctly. Generally, for an external application you just want these set to valid values so things will proceed. The application ID “XY” is reserved for non-SDK applications to use, so you don’t have any risk of conflicting with a third party application. It is important that you call Init before doing anything else. If you call some other method first, then expect to get strange error messages.

Below is the object model of all the objects you can get from an initialized session:

[Diagram: the AccpacCOMAPI session object model]

Company List

At this point we still haven’t signed in to a company. About all you can do now is sign on, but you can also get a list of companies that you can sign on to. This is the API Sage 300 uses to build its sign-on dialogs. The session object has an Organizations collection that you can traverse to get information on the available companies.

For i = 0 To mSession.Organizations.Count - 1
    Print mSession.Organizations.ItemByIndex(i).DatabaseID, _
          mSession.Organizations.ItemByIndex(i).Name
Next i

As you can see by the code, this API was invented by a C programmer and not a VB programmer.

Signing On

The main way you sign on to a company is to use the Open method.

mSession.Open "ADMIN", "ADMIN", "SAMLTD", Date, 0, ""

The main things you need for this method are the user id, password, company id and session date. After calling this, the next thing you usually do is create a database link, and then from the database link create your view objects. Now you can call the views and use all the Sage 300 business logic. The disadvantage of this method is that you need to know the user id and password, but otherwise you are good to go.

Session/Signon Managers

Of course, with what we have discussed so far you could create your own sign-on dialog. But why re-invent the wheel? The main Sage 300 ERP COM library is intended to be called from both user interface programs and server processes; as a result it has no user interface functions itself and will never pop up a message box or a dialog box. It is strictly processing and no UI.

However we do provide a number of other ActiveX controls that are intended to be used as UI components. Two of these are the Signon Manager and the Session Manager. You only interact with the Session Manager and then the Session Manager uses the Signon Manager whenever it needs it.

So if you don’t want to have to know the user id and password then you use the Session Manager to create your session for you and you get back a session that has been created, initialized and opened for you. The user will be able to enter their user id, password and select the company and session date to use for processing.

To use the Session Manager you need to add a reference for “ACCPAC Session Manager 1.0” or access the runtime\a4wSessionMgr.dll. Then you would write some code like:

Dim signonID As Long
Dim mSession As AccpacCOMAPI.AccpacSession
Dim sessMgr As New AccpacSessionMgr

sessMgr.AppID = "XY"
sessMgr.ProgramName = "XY1000"
sessMgr.AppVersion = "54A"
sessMgr.CreateSession "", signonID, mSession

The intent of the session manager was to facilitate things like workflow management. The first time someone accesses it, it will create a new session and the user will get a sign-on dialog. However, the next time it is accessed, you will just get back the session the user opened the first time. This allows applications to be strung together in a workflow-type manner without each step requiring the user to sign on. If you do want a fresh sign-on, you can set the ForceNewSignon property to true. If there are two desktops signed in and ForceNewSignon is false, then the user will get a dialog box to choose which session they want.

Summary

The external APIs to Sage 300 ERP are very powerful. Since the AccpacCOMAPI is used exclusively by our VB forms to access the Sage 300 business logic, you know that from this interface you can do anything that can be done from a regular UI. All business logic is exposed this way. So the intent of this posting was just to give you a little help in getting started to get at all that business logic.

Accpac’s Business Logic


I thought I might spend a few blog postings talking about Sage ERP Accpac’s Business Logic. With all the talk about version 6’s new web-based UIs, the Business Logic hasn’t been talked about much lately. However, the Business Logic is still the heart and soul of Accpac. It contains all the difficult application logic that provides the true business value and ROI of the product. This blog posting will give a bit of an overview of the Business Logic and its architecture; in future postings we’ll go into some of the details. Architecturally, the Business Logic is the middle tier of a three-tier architecture. The Business Logic talks to the Database layer through a database-independent API layer (which we talked about here: https://smist08.wordpress.com/2010/07/10/accpac-and-it%E2%80%99s-databases/).

Then the various users of the Business Logic talk to it through a number of standard APIs that are part of the Accpac System Manager. Individual Business Logic objects are called Views (not to be confused with database views).

Encapsulates One Logical Entity and Its Operations

Each Accpac View encapsulates one logical entity and its operations. These entities are things like A/R Customers, G/L Accounts, O/E Order headers, etc. They don’t have to be physical objects; they can be abstract concepts like G/L Posting (the View that posts G/L batches). Even though these entities are very different, each and every View has exactly the same API. Rather than having a unique API for each object, where whenever you encounter a new object you need to learn a new API, every Business Logic entity in Accpac has the same API. This means that if you learn how to do things with A/R Customers then you have also learned how to do the same things to A/P Vendors or G/L Accounts. This is also why there are many different types of APIs to access the Views: there is the .Net interface, two COM interfaces, several DLL interfaces and the forthcoming Java interface. Since the API is the same for every View, we just need to implement this interface in each technology, which is reasonably easy to do.

Implemented as DLLs

Each View is implemented as a Windows DLL (or in our Linux port as a Linux Shared Object (SO)). These DLLs can be implemented in any language that can produce DLLs with the set of API functions that we specify. In the original CA-Accpac/2000 1.0B, G/L, A/R and A/P had their Views implemented in COBOL. Since then all our Views are written in C (sometimes with parts in C++). This is a fairly low-level way to do things. It might be nice if we used a higher-level language like Java, but then we couldn’t have our Views as DLLs. The benefit of DLLs (or SOs) is that they are the lowest-level objects in the operating system and you can build anything on top of them. This is how we can have so many different APIs to access the Views, since virtually anything has a mechanism to call DLLs, because that is what is needed to talk to the operating system (which is itself just a set of DLLs (or SOs)). The Accpac SDK contains templates and tools to allow you to generate Views from tables that describe what the View needs to do; then you fill in routines to code the real functionality. You can open multiple copies of the same View, each with its own context; this way a View can be used in many ways at one time.

Standard Protocols

Views often don’t act in isolation; often it takes the co-operation of several Views to accomplish a task. For instance, for orders there are two Views, one for the Order header and another for each detail line. For a given order there is one order header and then multiple details, each one for the purchase of an I/C Item or a miscellaneous charge. To enter an order, the order header and detail Views need to work together. They do this to ensure transactional integrity, multi-user correctness and that various totals are maintained correctly. To do this we have a set of View protocols. If you are entering documents through Header/Detail-type Views then there are protocols that you need to follow. Following these well-documented protocols (see the System Manager User Guide or SDK Programming Guide) will ensure things go smoothly. If you don’t follow the protocols then you will get strange errors and your transactions may be rejected. These protocols are followed to insert, update and delete documents.

Combine Logical and Physical Data

The lower level Views or Data Views are thin wrappers over various database tables. They expose the fields in the underlying table almost identically to what is documented in the data dictionary. A good many Views are of this type, exposing the physical data in the database. However they can also add some logical fields such as calculated fields, fields looked up from other Views (like perhaps a description for a G/L account) or anything else that will be useful to the user of this View. Sometimes you can set these logical fields to control how the View does calculations, such as setting whether you want taxes calculated automatically or you will manually enter them.

Avoids Duplication of Code

When a View needs data from a table that it doesn’t manage, it doesn’t just go to the database to get the data; it calls the View that manages that other table. This avoids duplication of code, since the code for manipulating that object is in one place, namely its View. Every View that needs that same data calls that View. In fact, Views are the primary users of other Views, and the majority of View calls are made by other Views rather than by the external users of Views like macros or UIs. This also means calling Views can make use of all the logical fields added by the Views they call, as well as get at the physical data.

Maintain Database Integrity

Views are responsible for maintaining database integrity. We do not rely on the database server software to do this. Views will maintain field-level integrity as data is put to the View records. If it’s something simple, like a field needing to be upper case and the caller passing in lower case, the View will simply convert the data to upper case as it comes in. This is to make the Views more resilient and friendlier to the View’s users: if it can simply fix data, it will. At the next level it will reject bad data; for instance, if you enter an invalid currency code into a currency field, it will immediately reject it (actually there is a flag as to whether it immediately rejects it or waits for you to try to insert or update the record). At insert/update time the View will ensure record-level integrity: any fields that need to be validated together as a set will be. In a header/detail situation the View can also cross-validate data across several records. This is all handled for you by the View. This is why we insist that anyone writing data into an Accpac company database must do this through the Accpac Views. Writing data directly to the database through ODBC or ADO is very frowned on and can cause you to become unsupported.
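From an API such as the .Net interface, a caller sees this field-level validation as either a quiet success or an exception. A hypothetical sketch (the view object, field name and bad value here are placeholders for illustration; custView is assumed to be an open A/R Customers View, as in the .Net examples elsewhere on this blog):

// Hypothetical sketch: the View itself validates field values as they are set.
try
{
    // With the verify flag set to true the View checks the value immediately;
    // "CODECURN" and "XXX" are illustrative only.
    custView.Fields.FieldByName("CODECURN").SetValue("XXX", true);
    Console.WriteLine("Currency code accepted.");
}
catch (Exception e)
{
    // An invalid currency code is rejected by the business logic, not by the database.
    Console.WriteLine("View rejected the value: " + e.Message);
}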

Centralize Data Security

It might appear that Accpac controls data security through its UIs. This is because if you don’t have access to something, then either an entire UI is hidden from you, or, if you don’t have access to a feature in a UI, that feature is hidden. However, the UI is asking the View if the person has access and then hiding the feature as a UI nicety. If the UI was badly behaved and didn’t hide the feature and tried to use that feature in the View, then the View would reject it. This way security is maintained whether the data comes from a UI, a macro, import or a third party application using one of our APIs.

Allow Distributed Application Architecture

Since the View API is well defined and contains a fixed set of functions for all Views, it allows us to easily serialize the calls. This is the process of taking the arguments to the functions and putting them out as a serial set of bytes that can be transmitted over a communications link. The caller uses an API that serializes the calls and sends the serialized stream over a communications link like the Internet; the receiver then interprets the serialized stream and makes the View call. This enabled various remote APIs such as the old iConnect and Process Servers along with the newer DCOM and .Net protocols in use today. The new SData protocol works a bit differently, since it is based on the standard REST protocol, which is already serialized as a URL and XML payload; this is translated into a number of View calls when received by the server.

Summary

Hopefully this gives an indication of some of the things Views do and how they work. In future blog posts I’ll look at various aspects of the Views in more detail.

Written by smist08

September 11, 2010 at 8:14 pm