Stephen Smith's Blog

All things Sage ERP…


Unstructured Time at Sage


Introduction

Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically you give employees a number of hours each week to work on any project they like. They do need to make a proposal, and at the end they give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general considers not worthwhile, too risky or a very low priority. Companies like Google and Intuit have been very successful at implementing this and have gotten quite good results.

[Dilbert cartoon on Google's 20% time]

Unstructured Time at Sage

The Sage Construction and Real Estate (CRE) development team has been using unstructured time for a while now. They have had quite a lot of participation, and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers, including ours here in Richmond, BC.

At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.

The goal is for people to work on things they are passionate about: to get a chance to play with bleeding edge technologies before anyone else, or to develop that function, program or feature that they've always thought would be great but that the business has always ignored. I'm really looking forward to what the team will come up with.

[New Yorker cartoon: so many toys, so little unstructured time]

We are still doing Hackathons, Ideajams and our regular innovation process. This is just another initiative to further drive innovation at Sage.

Crazy Projects at Google

Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google's unstructured time that leads to self-driving cars, Google Glass, military robots, human brain simulations and their many green projects? Hopefully these get turned into good things and aren't just Google trying to create SkyNet for real. Maybe we'll let our unstructured time go crazy as well?

Anathem

I’m a big fan of Neal Stephenson, and recently read his novel Anathem. Neal’s novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians, who are divided into groups by how often they report their results to the outside world. The lowest order reports every year; the next group reports every ten years, then a group reports every hundred years, and finally the highest group reports only every thousand years. These groups don’t interact with anyone outside their order except for the week when they report and exchange information and literature with the outside world. This is in contrast to how we operate today, driven by “internet time”, where we have to produce results quickly and ignore anything that can’t be done quickly.

So imagine you could go away for a year to work on a project, or even for ten years. (Going away for 100 or 1000 years might pose some other problems that the monks in the novel had to solve.) The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? Certainly an intriguing prospect, in contrast to our current world where we need to produce something every few months.

My Project

So why am I talking about Anathem and unstructured time together? Well, one problem we have is how to get started on big projects with lots of risk. Suppose you know we need to do something, but doing it is hard and time consuming. Every journey has to start with a first step, but sometimes making that first step can be quite difficult. I’ve had the luxury of being able to do unstructured time for a while, because I’m a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won’t be on Product Managers’ road maps.

So I’ve done simple POCs in the past, like producing a mobile app using Argos. More recently I embarked on producing a 64-bit version of Sage 300. This worked out quite well and wasn’t too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they will take is very difficult. As I get a Unicode G/L going it becomes easier to estimate, but I couldn’t have taken the first step on the project without using unstructured time.

Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (they are accountable for their estimates). This has the side effect that they become very resistant to working on things that are open ended or hard to estimate. Generally, for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only rewarding meeting commitments and making good estimates.

Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.

Summary

Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge at the other end. Of course we are just starting this process, so it will take a little while for things to get built.

The Sage 300 System Manager Core DLLs


Introduction

We hold a developer’s exchange (DevEx) every couple of weeks where one of our developers volunteers to present to all the other developers in our office. This past week I presented at the DevEx on what all the core DLLs in our Sage 300 runtime folder do. I thought this might be of interest to a wider audience, so here are the gory details.

Architecture

Our marketing-supplied architecture diagram is the following, which highlights our three tiers and hides a lot of the details of how the object repository, APIs and supporting services are implemented. I’ve blogged previously on our Business Logic Views. In this article I’m going to go into more detail on the DLLs that provide the framework to support all of this.

[Sage 300 marketing architecture diagram]

Lower Level DLLs

If you are an ISV developing Sage 300 SDK applications, or have worked for Sage on the 300 product, then you will have encountered a number of these DLLs. I’m only looking at a subset of the current DLLs, and I’m not looking at the DLLs that support older technologies which are still present to maintain compatibility with add-ons.

[Diagram of the lower level DLLs]

I didn’t add arrows to this diagram since pretty much everything calls everything else below it, but I did segregate the DLLs a bit by how low or high level they are. Here is a quick synopsis of each one:

A4wcompat.dll: We created this DLL back when we did a native port of the Sage 300 Views for Linux. It isolates operating system differences that need more than some clever #defines; a big part of this is the thread and process synchronization and locking support. Even though we never released the native Linux version, this isolation of the operating system dependent parts has made adding multi-threading, 64 bit and Unicode support easier.

A4wmem32.dll: In 16 bit Windows, the built-in memory management was really slow, so everyone used their own. Now this DLL uses the default Windows and C memory management, but it is still important for global memory that needs to be shared across processes. Originally this was done through the data segment of a fixed DLL, but now it is done through memory mapped files.

A4wlleng.dll: This is just a language DLL that holds some lower level error messages used by System Manager.

A4wsqls.dll: This is the SQL Server database driver (there is also a4worcl.dll for Oracle and a4wbtrv.dll for Pervasive.SQL). This is dynamically loaded based on the type of database you are connecting to. For more on our database support see this article.

Cato3msk.dll, cato3dat.dll: The cato3 DLLs are the old CA common controls. We don’t use these in our UIs anymore, but cato3msk.dll provides the mask processing used by the Views. Similarly, we don’t use the cato3 date control anymore, but we do use a routine in it to format dates in error messages correctly.

A4wroto.dll: This handles the loading of the various View DLLs as well as the various UIs we’ve used in the past. It loads the roto.dat files and handles loading the right DLLs when View subclassing is going on or stub Views need to be used.

A4wsem.dll: This handles the locking of the semaphor.bin file. It allows processes to lock the company database, an application or the whole site. It also handles application specific cross workstation locking needs.

A4wrv.dll: This is the main DLL API entry point for the Views. It manages all the calling of the Views and handles other tasks like sending the calls for macro recording. For more on our View interfaces see this article.

A4wapi.dll: This is quite a hodge-podge of services for the Views like revision lists, error reporting and such. It also has support routines for the older CA-Realizer UIs. This is quite a big DLL and has most of our C level API in it.

A4wrpt.dll: This is our interface to Crystal Reports. It started as our interface to CA-RET, was then converted to Crystal using their CRPE DLL interface, then to Crystal’s COM interface, and now uses Crystal’s .Net interface.

A4wprgt.dll: This DLL handles replicating the system database tables into the various company databases when needed.

A4wmtr.dll: This is our meter DLL for long running processes. It can either put up a meter dialog or just report the current status and percent complete back to the caller. It also provides the API for cancelling long running processes.

Higher Level APIs

The next level are some of the DLLs that make up our Java, COM and .Net interfaces. There is a bit of complexity here due to how our previous web-deployed system worked: it could communicate back to the server originally using DCOM and later with .Net Remoting. The .Net Remoting layer provides both the communications layer for this web-deployed mode and our .Net API. How you create your original session determines which actual DLLs and which calling conventions are used.

[Diagram of the higher level API DLLs]

A4wapiShim.dll: This is the C side of our Java JNI layer. It talks to all the lower level DLLs to get its work done.

Sajava.jar: This is the Java side of our Java JNI interface. This allows Java programs to easily call Java classes to interface to our Business Logic Views. For more on this interface see this article.

A4wcomsv.dll: This is the main workhorse for the COM and .Net APIs. It does all the heavy lifting and interfacing to the core DLLs.

Accpac.Advantage.COMSVR.Interop.dll: This just performs the .Net to COM transition and is generated by the Microsoft tools.

Accpac.Advantage.Server.dll: Server side of the .Net API, handles the .Net Remoting requests if remotely called or just passes through otherwise.

Accpac.Advantage.Types.dll: Defines all the various types we use in our .Net API.

Accpac.Advantage.dll: This is the main external interface for our .Net API. For more on our .Net API see the series of articles starting with this one.

A4wcomexps.dll: Used when the VB UIs are going to talk .Net Remoting, this DLL is inside a4wcomex.cab.

A4wcomex.dll: The main entry point for the COM API.
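To make this layering concrete, below is a minimal sketch of getting a .Net session going, following the pattern from the .Net API series of articles. The program only references Accpac.Advantage.dll directly; the calls route through Accpac.Advantage.Server.dll, the COM interop layer and a4wcomsv.dll down to the core DLLs. The sample company and version string here are assumptions based on the standard sample data.

    using System;
    using ACCPAC.Advantage;

    class SessionDemo
    {
        static void Main()
        {
            // Only Accpac.Advantage.dll is referenced directly; the rest of
            // the DLL stack described above is loaded under the covers.
            Session session = new Session();
            session.Init("", "XY", "XY1000", "62A");  // version string is an assumption
            session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0, "");  // sample company
            Console.WriteLine("Session opened against sample data.");
        }
    }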

Many More DLLs

There are many more DLLs in the Sage 300 runtime, but most of the others are for obsolete APIs like the xapi, the older a4wcom COM API, the cmd API, the icmd API, etc. There are other important ones, like those to do with Database Setup, but the ones above are the main DLLs used when you talk to the Business Logic through one of the popular APIs.

Summary

For anyone interested, this should give you a good idea of what the main DLLs in the runtime folder do, and of how the various services in Sage 300 ERP are layered.

Written by smist08

February 8, 2014 at 5:19 pm

The Times They Are a-Changin’


Introduction

Right now we have our nephew Ian living with us as he takes a Lighthouse Labs developer boot camp program in Ruby on Rails and Web Programming. This is a very intense course with 8 weeks of instruction followed by a guaranteed internship of at least 4 weeks with a sponsoring company. A lot of this is an immersion in the current high tech culture that has developed in downtown Vancouver. This coincides with my work to expand the Sage 300 ERP development team in Richmond and our hiring efforts over the past several months. This article is based on a few observations and experiences around these two happenings.

Sage 300 ERP has been around for over thirty years now, so we have had quite a few generations of programmers working on the product. Over this time the theories of what a high tech office should look like, and of what a talented programmer wants in a company, have changed quite dramatically. As Sage moves forward we need to change with the times, adopting a lot of these new ways of doing things and accommodating these new preferred lifestyles.

Generally people go through three phases in their career: starting out single, with no kids, and renting; transitioning to marriage, home ownership and eventually kids; and finally kids leaving home and thoughts of retirement. Of course these days there can be some major career changes along the way as industries are disrupted and people need to retrain and reeducate themselves. Every office needs a good mix to build a diverse, energetic and innovative culture that has experience but is still willing to take risks.

Offices or No Offices

When I started with Accpac at Computer Associates, we were largely a cube farm, perhaps not too dissimilar to the picture below.

[Photo of a typical cube farm]

The goal was to have as much privacy as possible, which usually translated into high cube walls, other barriers and the hope of one day moving into an office. At the time Microsoft advertised that on their campus every employee got an office, so they could concentrate and think and be more effective at their work. I visited the Excel team at this time and they had two buildings packed with lots of very small offices, which led to long, narrow, claustrophobic hallways.

A lot has changed since then. Software development has largely adopted the Scrum/Agile model, where people work together as a team and social interactions are very important. Further, as products move to the cloud, developers need to team up with DevOps and all sorts of other people who are crucial to their product’s success.

Now most firms adopt a more open office approach: there are no permanent offices, and everyone works together as a team.

[Photo of a modern open plan office]

There is a lot of debate about which is better. People used to the privacy of offices and cubes are loath to lose it, while people who have been using the open office approach can’t imagine moving back to cubes. Also, with more people working a percentage of their time from home, a permanent spot at the office doesn’t always make sense.

Downtown versus the Suburbs

When I started with CA, the office was located in town near Granville Island. This was a great location: central, many good restaurants, and easily accessible via transit. Then, like many similar companies in the 90s, we moved out to a sprawling high tech park in Richmond. These parks were all landscapes of three story office buildings, each with a giant parking lot surrounding it, and all very similar whether in Richmond, Irvine, Santa Clara or elsewhere.

Now the trend is reversing and people are moving back to downtown. Most new companies are located in or near downtown, and several large companies have set up major development centers in town recently. Meanwhile the high tech parks in the suburbs are starting to have quite a few vacancies.

The Younger Generation

A lot of this is being driven by the twenty-something generation. What they look for in a company is quite different today than what I looked for when I started out. There are quite a few demographic changes as well as lifestyle changes that are driving this. A few key driving factors are:

  • The number of young people getting driver’s licenses and buying cars is shrinking. There are a lot of reasons for this, but people who can’t drive have trouble getting to the suburbs.
  • People are having children later in life, often putting it off until their late thirties or even forties.
  • City cores are being revitalized. Even Calgary and Edmonton are trying to get urban sprawl under control.
  • Real estate in the desirable high tech centers like San Francisco, Seattle or Vancouver is extremely expensive. Loft apartments downtown are often the way to go.
  • Much more work is done at home and in coffee shops.

This all makes living and working downtown much more preferable. It is also leading to people requiring less space and looking for more social interactions.

Hiring that Younger Generation

To remain competitive a company like Sage needs to be able to hire younger people just finishing their education. We need the infusion of youth, energy and new ideas. If a company doesn’t get this then it will die. Right now the hiring market is very competitive. There is a lot of venture capital investment creating hot new companies, many existing companies are experiencing good growth and generally the percentage of the economy driven by high tech is growing. Another problem is that industries like construction, mining and oil are booming, often hiring people at very high wages before they even think about post-secondary education.

What we are finding is that many young people don’t have cars, live downtown and are looking to work in a cool open office concept building.

We are in the process of converting our offices to a more modern open office environment. We do allow people to work at home some days. Maybe we will even be able to move back downtown once the current lease expires? Or maybe we will need to create a satellite office downtown.

Generally we have to become more involved with the educational institutions, hiring co-op students and other interns. We need to participate in more activities of the local developer and educational community, like the HTML500. We need to ensure that Sage is known to students and that they consider it a good career path to embark on. Often hiring co-op students now leads to regular full time employees later.

Since Sage has been around for a long time and has a large, solid customer base, we offer a stable work environment: you know you will receive your next paycheck. Many startups run out of funding or otherwise go broke. While the job market is hot, young people often don’t worry about this too much, but once you have a mortgage it becomes more important.

Summary

The times are changing and not only do our developers need to keep retraining and learning how to do things differently, but so do our facilities departments, IS departments and HR departments. Change is often scary, but it is also exciting and stops life from becoming boring.

Personally, I would much rather work downtown (I already live there). I think I will be sad when I give up my office, but at the same time I don’t want to become the stereotypical old person yelling at the teenagers to get off my lawn. Overall I think I will prefer a more mobile way of working, not so tied to my particular current office.


Written by smist08

February 1, 2014 at 5:50 pm

Branching by Feature


Introduction

Now that we’ve released Sage 300 Online as well as the on-premise Sage 300 ERP 2014, we need to change the way we develop features going forwards. We would like to develop features and frequently deploy these to Sage 300 Online so that customers can take advantage of these as soon as possible. Plus we would like to start including more features in our Product Updates.

This means that we have two separate products that would like to release features out of the same source code on their own timelines.

This article describes some of the issues with doing this and describes some aspects of the procedures we are following. In some cases there are additional benefits and in some cases there is additional work.

Operating SaaSy

Generally we are switching from releasing big bang product versions every year or so to releasing updates on a much more frequent basis. I’ve blogged about this before in these articles: SaaSifying Sage and SaaSifying Accpac. This article covers just one aspect of this process, namely how we now use our source control system, which is one piece of a continuous deployment pipeline. It also involves the DevOps team.

Doneness Criteria

A key ingredient in making this all work is having good “Doneness Criteria”, so that when agile stories are completed they are fully done, including all testing, automation, documentation, etc. Then when a “done” story is included in the release, it really works. The closer you can come to this the better; if you aren’t very close, you are going to need a lot of integration/regression testing between these small feature releases to ensure the quality of the product. An important aspect of this is having good automated tests to find integration problems and reduce manual regression testing.

Source Code Branching

Source code control systems have the concept of branching: you create a branch for a feature, go through the agile process developing the feature, and when it is all done you merge the branch back into the product. This is a very simple branching strategy which works great if you are developing one feature at a time. In reality, however, there is a lot of work going on simultaneously, including multiple features being developed at once, sustainment work (bug fixing) and infrastructure work (like the 64 bit version).

Modern source control systems like Subversion and Git provide very powerful features to easily create branches, incorporate changes from other branches and finally merge branches back together. We’ve used Subversion for some time now and have a lot of source code and history stored there. But for new projects we’ve been creating them in Git, partly due to the more powerful branching features.

To work effectively there are usually quite a few branches, but hopefully not as many as on the fractal tree below.

[Image of a fractal tree of branches]

So what sort of branches do we need? Potentially we could create a lot of branches in the source code. The main root of all the branches is called the trunk, and we always want the trunk to be in a releasable state. Perhaps we could have a development branch and a regression branch. Then from the development branch we fork off the individual features as separate branches. When we are happy with them, we merge them into the regression branch for more testing, and when that is completed they are merged into the trunk.

Generally this works well for one product, say a web product where the DevOps team controls what gets merged from the regression branch into the trunk. However when you have multiple products derived from the same source tree this can be quite complicated.

Another danger of too much branching is that the longer a branch lives, the further it can drift from the trunk due to work done on other branches, and merging back into a main branch or the trunk then becomes more difficult. This can be alleviated by merging changes done elsewhere into your branch as you go, so you have smaller merges along the way rather than one big, difficult merge at the end. Another advantage we have in Sage 300 is that it’s a very large product consisting of hundreds of DLLs, OCXs and EXEs, so different teams are often operating in quite different areas of the source code. However, there will still be points of contention; one happening right now is multiple features being added to the reporting engine, which is creating contention on that source code module.

So for a branching strategy we need to strike a compromise between keeping everything perfectly isolated behind a lot of gates for features to pass through, and keeping the management overhead down while reducing the difficulty of merging branches.

So our approach is to limit the number of branches. Trunk is the main branch that is released to production (i.e. to customers). Nothing is developed directly on the trunk; everything is developed on a feature branch. Before merging back into the trunk, a feature team must first merge the trunk into the feature branch and do some testing (including running our full automated testing suite). Then the feature can be merged back into the trunk, where it is available for either the Online or on-premise products to consume. A sketch of this flow follows.
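As a rough sketch, the flow for a single feature might look like this in Git (the branch and feature names here are made up for illustration):

    # Nothing is developed on trunk; each feature gets its own branch.
    git checkout -b feature/unicode-gl trunk

    # ... develop and commit on the feature branch ...

    # Before merging back, bring trunk into the feature branch and test there.
    git merge trunk
    # (run the full automated test suite on the feature branch)

    # Only then does the feature land on trunk, ready for either product.
    git checkout trunk
    git merge feature/unicode-gl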

Consequences

Of course this has consequences for other components in the build pipeline. Rather than just building the trunk with our set of build servers, we now have to build all these branches. As part of the build we run a number of unit tests, but we also have a large set of automated tests that run on a separate set of servers, and now we want to run these automated tests on the branches as well.

Generally this increases the logistics of maintaining all these sets of servers. The build servers need to be configured with which branch to build, and the output of each build needs to be fed into the automated test servers to run all the tests.

Generally this all improves the quality and keeps the trunk in a releasable state since a lot of testing has been done before the feature is merged into the trunk.

Newer components like the Sage 300 Online home page and the Sage 300 ERP provisioning engine are written entirely in ASP.Net and built and deployed using TeamCity. This simplifies the whole process, and we can use a more sophisticated branching strategy in conjunction with DevOps, where they have a release branch that only they control, giving them complete control over what they merge, build and deploy.

Summary

We’ve been putting this infrastructure in place for some time now and have it operating fairly smoothly. Now we will really start putting it into practice by releasing features frequently on two product lines. The main test that we’ve done it right is that it’s seamless to customers because we’ve been able to maintain high quality with all these processes and technologies in place.

Sage 300 ERP Optional Fields


Introduction

Optional Fields were added to Sage 300 ERP as the major feature for version 5.3A. They are a great way to add custom data fields to many master and transaction screens in Sage 300. They also have the benefit that we can flow them with the transactions through the system, for instance from an O/E Order to the corresponding A/R Invoice to the G/L Batch. This opens up a lot of power for tracking extra data in the system and doing sophisticated reporting based on it.

In this article we will look at some of the programming considerations when dealing with Optional Fields at the API level. We’ll use the Sage 300 .Net API to explore how to deal with a few things that come up with Optional Fields.

Many programmers feel they can ignore Optional Fields, and their program works perfectly against sample data, but as soon as it is installed at a customer site it fails. A common cause is that the customer has required optional fields that cannot be ignored, so the API program needs to be updated. (Another common failure at customer sites is caused by not dealing with locked fiscal periods.)

[Screenshot of the Optional Fields setup screen]

Ordered Header/Detail

From my previous article on the View protocols, any optional field View is an ordered detail whose header is whatever it is an optional field for. An ordered header/detail means that you set the value of the key. But you do have to be careful to set up your optional fields correctly in the application’s setup UI, since there is validation against these.

In the sample program I’ll give a complete set of steps. In some situations some of the steps can be skipped, for instance you don’t really need to call RecordClear and RecordGenerate for the header optional fields, but I’ll leave them in since sometimes you do, and it’s easier to just use a formula that always works rather than re-thinking everything each time.

Revision Lists

Since Optional Fields are often sub-details of details that are stored in revision lists, you sometimes need to be careful that things exist before using them. Revision Lists are our mechanism for storing things in memory before they are written to the database in a single database transaction. Until you save the header, nothing is in the database and everything is in memory; the Revision List stores the list of details that have been manipulated so far. The Optional Fields are themselves stored in Revision Lists until the big database transaction happens.

In the sample program we insert the detail before adding its optional fields. Until the detail is inserted there is nothing to attach the optional fields to, so we need to do that first. The Insert operation only adds the detail record to a revision list in memory, but once that is done we can add sub-details (i.e. Optional Fields) to it, which are also stored in memory.

Sample Program

For a sample program, I modified the ASP.Net MVC sample ARInvEntry to save a couple of optional fields for the header and a couple of optional fields for the detail. Below is a subset of the code to insert the detail optional fields:

   // 9. Insert detail.
   arInvoiceDetail.Insert();   // Insert the detail line (only in memory at this point).

   // 10. Add a couple of detail optional fields.
   arInvoiceDetailOptFields.RecordClear();
   arInvoiceDetailOptFields.RecordGenerate(false);
   arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("EXTWARRANTY", false);
   arInvoiceDetailOptFields.Fields.FieldByName("VALIFBOOL").SetValue(true, false);
   arInvoiceDetailOptFields.Insert();
   arInvoiceDetailOptFields.RecordClear();
   arInvoiceDetailOptFields.RecordGenerate(false);
   arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("WARRANTYPRD", false);
   arInvoiceDetailOptFields.Fields.FieldByName("VALIFTEXT").SetValue("180 Days", false);
   arInvoiceDetailOptFields.Insert();

   // 10.5 Register the changes for the detail.
   arInvoiceDetail.Update();

   // 11. Insert header. (This will do a Post of the details.)             
   arInvoiceHeader.Insert();

Different Field Types

If you look in the database you will see that the Optional Field tables don’t hold that many fields, yet any Optional Field View has quite a few fields; many of them are calculated. Optional Fields can be all sorts of different types, but in the database all the values are stored in a single VALUE field regardless, so there has to be a conversion to/from this text field and the real type. This is the job of the VALIFtype fields. Basically you use the VALIF field matching the type of the Optional Field, and the View handles the conversion to and from the type as stored in the VALUE database field. That is why we used the fields VALIFTEXT and VALIFBOOL above.

Auto Insert

In the sample program I just inserted the optional fields myself. However, there is another way. Views that have optional fields usually have a field called PROCESSCMD (or something similar) where you can set a value with a title like “Insert Optional Fields”. You can set this field and call Process on the View, and it will insert the needed optional fields for you. Then you can read each optional field, set its value and update it. Some people find this an easier way to do things, and you get all the optional fields with their default values as a bonus. (Note that this only applies to Optional Fields that are set to auto insert in the application’s Optional Fields setup screen.) A sketch of this approach follows.
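Here is a minimal sketch of that approach. Treat the details as assumptions to check against the View’s spec: the process command field name and the command value for “Insert Optional Fields” vary by View, and arInvoiceHeaderOptFields stands for the header optional fields View from the sample.

    // Sketch only: the field name PROCESSCMD and the command value 1 for
    // "Insert Optional Fields" are assumptions; check the View's spec.
    arInvoiceHeader.Fields.FieldByName("PROCESSCMD").SetValue(1, false);
    arInvoiceHeader.Process();

    // The auto inserted optional fields now exist with their default values;
    // read one, change its value and update it.
    arInvoiceHeaderOptFields.Fields.FieldByName("OPTFIELD").SetValue("EXTWARRANTY", false);
    if (arInvoiceHeaderOptFields.Read(false))
    {
        arInvoiceHeaderOptFields.Fields.FieldByName("VALIFBOOL").SetValue(true, false);
        arInvoiceHeaderOptFields.Update();
    }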

Similarly for Views that will transfer in Optional Fields (like from an Order to a Shipment), you will see PROCESSCMD’s like “Default and Transfer Optional Fields”. If you are doing API programming and want to preserve the flow of Optional Fields, you will occasionally have to set one of these and call Process.

When dealing with Optional Fields, it’s worth checking out the PROCESSCMD functions of the main View to see if they will help you do your job and save you a fair bit of coding.

Lookup Values

Many optional fields have a list of valid values, and if you don’t set one of these you will get an error message to that effect. When dealing with optional fields, make sure you have an error handler to show the errors after an exception, as explained here. If you want to get at these values, they are in CSOPTFD CS0012 (a detail of CSOPTFH CS0011). You can read through these Views like any other, as explained here; a sketch follows.
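As a sketch, under the same assumptions as the earlier articles in this series (an open DBLink called dbLink, and browsing with Browse/Fetch), listing the valid values for one optional field might look like this; the optional field name is just the one from the sample above.

    // Sketch: list the valid values for one optional field from CSOPTFD (CS0012).
    // Assumes dbLink is an open DBLink as in earlier articles in this series.
    View optFieldValues = dbLink.OpenView("CS0012");
    optFieldValues.Browse("OPTFIELD = \"WARRANTYPRD\"", true);
    while (optFieldValues.Fetch(false))
    {
        Console.WriteLine(optFieldValues.Fields.FieldByName("VALUE").Value);
    }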

Summary

Optional Fields are a powerful feature in Sage 300 ERP. Many customers use these in a fundamental way to support their businesses. This means that developers working in the Sage 300 world have to be cognizant of these and make sure their programs are compatible with their use.

Written by smist08

January 4, 2014 at 3:54 am

Error Reporting in Sage 300 ERP


Introduction

A very important aspect of any application is how it handles errors and how it informs the end user about them. When everything is entered properly and people take what is called the “happy path” through the program, there is no issue. But end users will stray from the happy path, and other circumstances can conspire to create the need to inform the user of unusual conditions.

In our previous blog postings on using the .Net API we deferred our discussion of error reporting, but now we are going to tackle it and add error reporting to the ASP.Net MVC sample program we started last week. In doing so we will introduce some new concepts, including starting to use JQuery and Microsoft’s Unobtrusive Ajax.

Errors

Generally when we refer to errors in this article we also mean warnings and informational messages. Often you hit Save on a form, and if something is wrong you get a message box telling you what was wrong (and maybe even what to do about it). However, this is a bit of an oversimplification. Sage 300 ERP is a three tier client server application, and many of the errors originate in the business logic or the database server. These may be running as Windows services with no way of doing any user interaction; they can’t simply pop up a message box. The error message must be transmitted to wherever the UI is running (say in a web browser somewhere on the Internet) and displayed there in a form that fits in with the general design of the screen.

Further, there may be more than one error. It is annoying to have error messages pop up one at a time to be answered, and it is also very annoying to hit Save, have one error message appear, correct the thing that was wrong, hit Save again, and then have a further error appear, and so on.

To solve these problems we collect errors, warnings and messages into a collection which is maintained during processing and forwarded up to the higher levels when processing is complete. We see this in the .Net API as the Errors collection on the Session object. As processing proceeds, any business logic component can add messages to this collection. The UI can then process the collection and clear it after reporting the errors.

All the actual error messages referenced from the business logic are stored in Windows resource files, with one provided for each language. The API used by the business logic accesses the resource file matching the language of the Sage 300 user associated with the session object.

Inside the collection, each error has a priority such as Severe Error, Error, Security, Warning or Message. This way we can further decide how to display the error: perhaps make the error dialog title bar red for errors, blue for warnings and green for messages.

Exceptions

Inside our .Net API we return a return code for simple things that should be handled by the program. For instance, if we go to read a record, we return false if it doesn’t exist and true if it does. But for more abnormal things we throw an exception. You should always catch these exceptions, since even when you are just debugging a program these messages can be very helpful.

Just because an exception is thrown doesn’t mean there is an error on the error stack. The exception might be caused by something else, say the .Net runtime. Because of this you will see in our exception handler that if there is no error from Sage 300, we display the error that is part of the exception (perhaps from .Net).
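So each block of API calls ends up wrapped along these lines, with MyErrorHandler being the routine shown later in this article:

    try
    {
        // ... calls into the Sage 300 .Net API that may throw ...
        arInvoiceHeader.Insert();
    }
    catch (Exception e)
    {
        // Shows Sage 300 errors from session.Errors if there are any,
        // otherwise falls back to the .Net exception message.
        MyErrorHandler(e);
    }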

Sample Program

To add error reporting to our ASP.Net MVC sample program we are going to start introducing how to program richer functionality in the browser. As we left off, you hit submit, the processing was done and the page was refreshed completely. This is rather old school web programming, and a bit better user interaction is expected these days. So I’ve updated the ARInvEntry sample (located here) to make an Ajax request rather than a page submit request. As a result the page isn’t completely redrawn, and we can do a bit more processing. In this case we will use JQuery to put up a dialog box with any messages, warnings or errors that were encountered. Normally when you save an invoice no success message is displayed, but here we will put up a success message as well.

First off, now that we are programming in JavaScript in the browser, we are in an interpreted environment, which means there is no compiler to catch dumb programming mistakes and typos. One defence is to watch the syntax highlighting in the editor and use intelli-sense to avoid some typos. Another issue is that we are passing data structures from C# on the server to JavaScript on the client, and there is no validation that C# and JavaScript have the same understanding of the object. Further, JavaScript tends to silently ignore errors and keep on going, so you can be rather mystified when things don’t work. Some good tools for tracking down errors are Firebug and Fiddler, and trying different browsers can help also.

First I changed the page submit call to an Ajax call using Microsoft’s unobtrusive JavaScript library.

        @using (Ajax.BeginForm("CreateInvoice", "Home", null,
             new AjaxOptions { OnSuccess = "showResult" }))

Now the request will be made with Ajax and any returned data will be passed to the showResult JavaScript function. Pretty easy change, but there is one major missing piece: we must include the JavaScript library for this. The JavaScript file for any library needs to be added to the Scripts section of the project, and then an entry needs to be added to BundleConfig.cs. Here we added jquery.unobtrusive-ajax.js to the jquery bundle.

     bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
         "~/Scripts/jquery-{version}.js",
         "~/Scripts/jquery-ui-{version}.js",
         "~/Scripts/jquery.unobtrusive-ajax.js"));

We also added jquery-ui-{version}.js. The framework will figure out which version to fill in from the js files in the Scripts folder; this way we can update these without having to change any C# code. If we hadn’t added the unobtrusive library then nothing would happen, and we wouldn’t get any errors because nothing would be hooked up to fail. When having trouble with JavaScript, always check that you have the correct libraries added. Similarly, we added the css and images for JQuery to the project. You could put these in script tags in your HTML, but the bundles help with eventual deployment, as we’ll see in a later article. These bundles are referenced in the cshtml files to actually include them in the generated HTML. One gotcha is that sometimes the order of including them matters due to naming conflicts; for instance, the bootstrap bundle must be placed before the JQuery bundle or something in JQuery won’t work.

Now we add the showResult function and what it calls.

function showResult(data)
{
    $.fn.showMessageDialog(data.Warnings, data.Errors, data.Messages);
}
(function ($) {
    $.fn.showMessageDialog = function (warnings, errors, messages) {
        $("#infoWarningsBlock").hide();
        $("#infoSuccess").hide();

        if ((warnings != null && warnings.length > 0) ||
            (errors != null && errors.length > 0) ||
            (messages != null && messages.length > 0)) {

            var text = "";
            var i;
            if (messages != null && messages.length > 0) {
                $("#infoSuccess").show();
                for (i = 0; i < messages.length; i++) {
                    text = text + "<li>" + messages[i] + "</li>";
                }
            }
            $("#infoSuccess").html(text);
            text = "";
            if (errors != null && errors.length > 0) {
                $("#infoWarningsBlock").show();
                for (i = 0; i < errors.length; i++) {
                    text = text + "<li>" + errors[i] + "</li>";
                }
            }
            if (warnings != null && warnings.length > 0) {
                $("#infoWarningsBlock").show();
                for (i = 0; i < warnings.length; i++) {
                    text = text + "<li>" + warnings[i] + "</li>";
                }
            }
            $("#infoWarnings").html(text);

            $("#informationDialog").dialog({
                title: "Information",
                modal: true,
                buttons: {
                    Ok: function () {
                        $(this).dialog("close");
                        $(this).dialog("destroy");
                    }
                }
            });
        }
    }

}(jQuery));

Besides setting up the HTML to display, this code mostly relies on the JQuery dialog function to display a nice message/error/warning dialog type box in the Browser using an iFrame.

This routine requires a bit of supporting HTML, which we put in the _Layout.cshtml file so it can be shared.

    <div id="informationDialog" style="display: none;">
        <ul id="infoSuccess" style="display: none;"></ul>
        <div id="infoWarningsBlock" style="display: none;">
            <div id="infoWarningsHeader">
                <h4>Warnings/Errors</h4>
            </div>
            <ul id="infoWarnings"></ul>
        </div>
    </div>

Notice it has display set to none so it is hidden on the main page.

From the C# code on the server here is the error handler:

        //
        // Simple generic error handler.
        //
        private void MyErrorHandler(Exception e)
        {
            if (session.Errors == null)
            {
                Errors.Add(e.Message);
                Console.WriteLine(e.Message);
            }
            else
            {
                if (session.Errors.Count == 0)
                {
                    Errors.Add(e.Message);
                    Console.WriteLine(e.Message);
                }
                else
                {
                    copyErrors();
                }
            }
        }

        private void copyErrors()
        {
            int iIndex;

            for (iIndex = 0; iIndex < session.Errors.Count; iIndex++)
            {
                switch (session.Errors[iIndex].Priority)
                {
                    case ErrorPriority.Error:
                    case ErrorPriority.SevereError:
                    case ErrorPriority.Security:
                    default:
                        Errors.Add(session.Errors[iIndex].Message);
                        break;
                    case ErrorPriority.Warning:
                        Warnings.Add(session.Errors[iIndex].Message);
                        break;
                    case ErrorPriority.Message:
                        Messages.Add(session.Errors[iIndex].Message);
                        break;

                }
                Console.WriteLine(session.Errors[iIndex].Message);
            }
            session.Errors.Clear();
        }

It separates the errors, warnings and messages. The copyErrors method was split out so it can also be called in the success case, in case there are any warnings or other messages that should be communicated.

The controller now just passes the variables from the model back to the web browser.

        public JsonResult CreateInvoice(Models.CreateInvoice crInvObj)
        {
            ResultInfo results = new ResultInfo();

            results.Messages = crInvObj.Messages;
            results.Errors = crInvObj.Errors;
            results.Warnings = crInvObj.Warnings;

            crInvObj.DoCreateInvoice();

            return Json(results);      
        }

All the work here is done by the Json method, which translates the C# objects into JSON objects. It even translates the C# list collections into JavaScript arrays of strings. So for the most part this handles the communication back to the browser and the JavaScript world.
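For reference, here is a minimal sketch of what the ResultInfo container must look like for this to work; the real class in the sample may differ slightly.

    using System.Collections.Generic;

    // Sketch of the ResultInfo shape serialized back to the browser; each
    // List<string> becomes a JavaScript array of strings in the Json result.
    public class ResultInfo
    {
        public List<string> Messages { get; set; }
        public List<string> Errors { get; set; }
        public List<string> Warnings { get; set; }
    }

Note that the controller copies the list references into results before calling DoCreateInvoice(); since lists are reference types, the model fills those same lists during processing and they are serialized afterwards.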

[Screenshot of the resulting error/warning dialog]

Summary

We discussed the Sage 300 ERP error reporting architecture and saw how to integrate it into our ASP.Net MVC sample program. Since this was our first use of Ajax and JQuery, we had a bit of extra work to do to set all that up. Still, we didn’t have to write much JavaScript, and the framework handled all the ugly details of Ajax communication and of translating data between what the server understands and what the browser understands.
