Stephen Smith's Blog

All things Sage ERP…


Launching Non-SDK Programs From the Desktop



Introduction

When SDK applications are launched from the Desktop, they are passed an object handle that they can use to create a session that exactly matches the session of the desktop that launched them. Further, the Desktop can manage these programs, for instance putting up an error message if you try to close the Desktop while they are still running.

Quite a few people create programs that aren’t written using our SDK, but still are tightly integrated to Sage 300 ERP via one of our APIs such as the COM API, .Net API or Java API. You can add arbitrary EXE programs to the Desktop as icons and launch them just like any other screen.

When we designed the current UI framework, we intended that our UIs could be run from many places, such as VBA macros or hosted inside Internet Explorer. We also envisioned them being strung together in workflow-type applications. Toward this end we created the Session Manager and the Signon Manager to help tie together programs running inside the desktop with programs running outside it. For information on using the Session Manager, check out this blog posting.

Generally this has worked quite well. Especially if you only sign on to one company, all the various things running will share the same session and you won't have to sign on to everything separately. However, there are a few limitations to our current approach. If there are two desktops running (usually signed on to different companies), then when the external program runs it has to present a dialog to choose which company's session to use. This makes sense if you, say, start an EXE program from the Start menu, since how would it know which desktop session to use? But when you run from the desktop, you would expect it to just use that desktop's session rather than being treated as if it wasn't run from the desktop. Similarly, if you started an EXE program from the desktop, you would expect the Desktop to prevent closing until this program is closed; right now this check applies only to SDK programs run from the desktop and not to anything else.

A lot of people are creating standalone EXEs that use our APIs and the Session Manager, which generally works well, but they would like the other cases handled as nicely. So for the upcoming Sage 300 ERP 2014 release we have added some support to the Desktop to help with this: basically, it allows non-SDK programs to start with the same protocol as SDK programs, so they can behave in the same manner. The alpha (developer) early release of Sage 300 ERP 2014 is available to ISVs, so you can try this out now.

New Macro Substitution

If you add a program to a group file, or add a program via File New in the Desktop, and you specify $objecthandle$ as an argument, then it will be translated into an object handle when your program is run, which you can use in other API calls to get a session that matches your desktop. For instance, in a group file:

[ PROGRAM ]
ID = "XX0500"
PARENT = "XX0000"
DESCRIPTION = "test ojbect key"
CMDLINE = "c:\\accpac6\\sm\\testobj\\testobj.exe $objecthandle$"
RSC = "XXSDK"

 

This token lets you create a session that exactly matches the Desktop that launched you.

If you aren't an SDK program, you probably don't have your own group file. However, you can use a couple of SDK tools to add your item to an existing group file. The program unccgrp.exe (usually installed in c:\pluswdev\bin) will de-compile a group file (these are usually grp.dat files in a program's language folder, like ar62a\eng\grp.dat). You can then add your own entries and use ccgrp.exe to re-compile the group file. This is a bit of a kludge because your changes may be overwritten by Product Updates, and two people trying to do this at once may collide and interfere with each other. But it can be an effective and useful technique.

How to Use the Object Handle

You can then pass the object handle into a session Init call and your session will be configured to match the desktop you were run from. Additionally, if you want to register your window handle, you can use the Roto API to do so, given the object handle. You should also clear this when you terminate.

Below is a VB program which does these things. It uses the session it gets to display the company name in a label. This way it will be connected to the right Desktop and the Desktop knows when it’s running.

Private Declare Sub rotoSetObjectWindow Lib "a4wroto.dll" (ByVal objectHandle As Long, ByVal hWnd As Long)
Dim strObjectHandle As String

Private Sub Command1_Click()
    Unload Me
    '
    ' clear our window handle when closing so we don't block the desktop closing
    '
    rotoSetObjectWindow Val(strObjectHandle), 0
End Sub

Private Sub Form_Load()
    Dim mSession As New AccpacCOMAPI.AccpacSession

    '
    ' Get the object handle from the command line
    '
    strObjectHandle = Command$

    MsgBox "Object Handle = " + strObjectHandle

    '
    ' Use the object handle to initialize the session. With this you will inherit
    ' an open session matching the desktop that launched you.
    '
    mSession.Init strObjectHandle, "XY", "XY1000", "62A"

    '
    ' Set the window handle so the desktop can track whether you have closed
    ' effectively doing a roto openok.
    '
    rotoSetObjectWindow Val(strObjectHandle), Me.hWnd

    Dim mDBLinkCmpRW As AccpacCOMAPI.AccpacDBLink
    Set mDBLinkCmpRW = mSession.OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)

    Dim CSCOM As AccpacCOMAPI.AccpacView
    Dim CSCOMFields As AccpacCOMAPI.AccpacViewFields
    mDBLinkCmpRW.OpenView "CS0001", CSCOM
    Set CSCOMFields = CSCOM.Fields

    CSCOM.Fetch
    Label1.Caption = CSCOMFields("CONAME").Value

End Sub

Sage 300 ERP Desktops Through the Ages


Introduction

With our upcoming 2014 version of Sage 300 ERP (to be released in 2013), one of the features is an improved look for our Windows Desktop Launcher. I thought it might be fun to go through a bit of history and review all the various Desktops we've included with Sage 300 ERP.

16-Bits

The first Windows version of Sage 300 was for the 16-bit Windows 3.1 system. In these early versions of Windows, there was no ability to have folders in your lists of icons. Having nested folders of icons was a big feature of IBM's OS/2 desktop, and IBM released this icon/folder system as a custom control for Windows. We took the IBM library and used it as the basis for our original desktop, shown below:

[Screenshot: desktop30]

Here you could have folders (like Bank or Tax Services) which drilled down to further windows, basically using the innovative OS/2 technology of the day. Otherwise this desktop was implemented as a standard Windows MDI application written in C. This was the desktop used exclusively for versions 1.0A to 3.0A. It was also used for the 16-bit versions of 4.0A and 4.1A.

32-Bit

IBM never produced a 32-bit version of their OS/2 control library for Windows. As a result we had to create a new desktop for our 32-bit version, which we did using the Microsoft MFC framework. This was the age of Windows 95, when Microsoft introduced nested icons and tree controls in the Windows 95 file explorer. These new controls are easy to use from MFC, and we created a new desktop using them, written in C++. This desktop has been included in all our versions since 4.0A. With 4.2A we dropped the 16-bit version and this became our only desktop for a short while.

[Screenshot: desktop32]

The toolbar and status bar are standard MFC application features. The licensing pane is created by embedding an Internet Explorer ActiveX control.

The Web

With version 5 we introduced our first web version. This was based on building our UIs in VB6 and compiling them as ActiveX controls; we could then run them in an Internet Explorer window. The controls would be downloaded and installed automatically, and they would communicate back to the server using DCOM or .Net Remoting.

But now we needed a way to select and launch the UIs this way, so we created an ASP application to do this. This was our Web Desktop.

[Screenshot: webdesktop]

One feature of this desktop is that, since we were now on the Web, we weren't limited by the usual Windows icon sizes, so all the "icons" are actually 80×80 pixel bitmaps which look quite a bit better than those on the standard Windows desktop.

Of course the previous 32-bit desktop is still in the product for the majority of people who aren't running web deployed. As it turns out, this desktop was never used much, because the majority of people who used our web deployment mode only used it from Sage CRM, and as a result those UIs were run by CRM rather than our Web Desktop.

The Sage Desktop

Shortly after our acquisition by Sage there was a companywide initiative to standardize the desktop/launcher program across all Sage products. This was the Sage desktop.

[Screenshot: sagedesktop]

This desktop was written in C# as a standard .Net WinForms program. It included things like desktop notifications and news from Sage, had a modern look, and supported most things expected of a desktop. However, Sage 300 ERP and CRE were really the only Sage applications that adopted it, and eventually development was discontinued. This desktop was included in versions 5.5A and 5.6A.

The Sage 300 Web Portal

With version 6.0A we introduced the new Web Portal which included data snapshots and a data inquiry tool in addition to the ability to launch screens.

[Screenshot: oriondesktop]

For more information on this Portal, have a look at this blog posting.

Updated Windows Desktop

With the forthcoming release of Sage 300 ERP 2014 we are giving the standard Windows desktop, introduced back in version 4.0A, an updated look. We have all new icons for both the programs and the toolbar, plus a new look for the toolbar.

[Screenshot: desktop62]

This gives the standard Sage 300 desktop a fresher look, especially by refreshing all those icons that have been in the product since version 1.0A.

Summary

A lot of attention gets spent on the Desktop/Launcher program since it is usually people's first impression of running our product. Although most people do most of their work in a program like Order Entry, it's still good to keep improving the Desktop. Looking back at the desktops, you can see the influences of the various technologies that were popular at the time.

Written by smist08

June 16, 2013 at 10:28 pm

Windows Bit-Rot


Introduction

In investigating some performance problems reported on systems running Sage 300 ERP, I was led down the road of investigating Windows Bit-Rot. Generally, Bit-Rot refers to the gradual degradation of a system over time. Windows has a very bad reputation for Bit-Rot, but what is it? And what can we do about it? Some people go so far as to reformat their hard disk and re-install the operating system every year as a rather severe answer to Bit-Rot.

Windows Bit-Rot is the tendency of a Windows system to get slower and slower over time: slower to boot, longer to log in, and longer to start programs, along with other symptoms like excessive and continuous hard disk activity when nothing is running.

This blog posting is going to look at a few things that I’ve run into as well as some other background from around the web.

Investigation

I needed to investigate why printing Crystal reports was quite slow on some systems. This involved software we have written as well as a lot of software from third parties. On my laptop, Crystal would print quite slowly the first time and then quickly on subsequent runs. My computer is used for development and is full of development tools, so the things I found here might be more relevant to me than to real customers. So how to see what is going on? A really useful program for this is Process Monitor (procmon) from Microsoft (from their SysInternals acquisition). This program shows you every access of the registry, the file system and the network. You can filter the display; in particular, you can filter to monitor only a single program to see what it's doing.

[Screenshot: procmon]

ProcMon yielded some very interesting results.

The Registry

My first surprise was to see that every entry in HKEY_CLASSES_ROOT was read. On my computer, which has had many pieces of software installed, including several versions of Visual Studio, several versions of Crystal Reports and several versions of Sage 300 ERP, the number of classes registered here was huge. OK, but did it take much time? The first time a program that does this runs, it seems to take several seconds; after that it's fast, probably because the registry ends up cached in memory. It appears that several .Net programs I tried do this. I'm not sure why; perhaps .Net just wants to know all the classes in the system.

But this does mean that as your system gets older and you install more and more programs (after all why bother un-installing when you have a multi-terabyte hard drive?), starting these programs will get slightly slower and slower. So to me this counts as Bit-Rot.

So what can we do about this? Un-installing unused programs should help, especially ones that register a lot of COM classes: Visual Studio is the big one on my system, followed by Crystal and Sage 300. This helps a bit, but there are still a lot of classes there.

Generally I think uninstall programs leave lots of bits and pieces in the registry. So what to do? Fortunately this is a good stomping ground for utility programs. Microsoft used to have RegClean.exe; Microsoft discontinued support for this program, but you can still find it around the web. A newer and better utility is CCleaner from Piriform; fortunately the free version includes a registry cleaner. I ran RegClean.exe first, which helped a bit, then ran CCleaner and it found quite a bit more to clean up.

Of course there is danger in cleaning your registry, so it’s a use at your own risk type thing (backing up the registry first is a good bet).

At the end of the day, all this reduced the first-time startup of a number of programs by about 10 seconds.

Group Policy

My second surprise was the number of calls to check Windows Group Policy settings. Group Policy is a rather ad-hoc mechanism added to Windows to allow administrators to control networked computers on their domain. Each group policy is stored in a registry key, and when Windows goes to do an operation controlled by group policy, it reads that registry key to see what it should do. I was surprised at the amount of registry activity that goes on reading and checking group policy settings. Besides annoying users by restricting what they can do on their computers, group policy appears to cause a generally high overhead of excessive registry reading in almost every aspect of Windows operation. There is nothing you can do about this, but it appears that as Windows goes from version to version, more and more gets added and the overhead gets higher and higher.

Auto-Updates

You may not think you install that many programs on your computer, so you shouldn't have these sorts of problems, but remember that many programs, including Windows/Microsoft Update, Adobe Updater and such, are regularly installing new programs on your computer. Chances are these programs are leaving behind unused bits of older versions that clutter up your file system and your registry.

Auto-Run Crap

Related to auto-updates, it seems that so many programs now run as icons in the task bar, install Windows services, or install programs to run when you log in. All of these slow down the time it takes to boot Windows and to sign in. Further, many of these programs, like Dropbox, will keep frequently polling their server to see if there are any updates. Microsoft has a good tool, Autoruns for Windows, which helps you see all the things that are automatically run and remove them. Again, this can be a bit dangerous, as some of them are necessary (perhaps like a trackpad utility).

Similarly it seems that everyone and their mother wants to install browser toolbars. Each one of these will slow down the startup of your browser and use up memory and possibly keep polling a server. Removing/disabling these isn’t hard, but it is a nuisance to have to keep doing this.

Hard Disk Fragmentation

Another common problem is hard drive fragmentation. As your system operates, the hard disk becomes more and more fragmented. Windows has a de-frag program, but it is typically either scheduled to run when your computer is turned off or never run by hand. It is worth de-fragging your hard drive from time to time to speed up access. There are third-party de-frag programs, but generally I just use the one built into Windows.

Related to the above problems, un-installation programs often leave odds and ends around, and sometimes it's worth going into Explorer (or a cmd prompt) and deleting the folders of un-installed programs. Generally it reduces clutter and speeds up operations like reading all the folders under Program Files.

Dying Hard Drives

Another common cause of slowness is that as hard drives age, rather than outright failing, they often start having to retry reading sectors more. Windows can mark sectors bad and move things around. Hard drives seem to be able to limp along this way for a while before completely failing. I tend to think that if you hear your hard drive resetting itself fairly often, you should replace it. Or if you see the number of bad sectors growing when you defrag, replace it.

Summary

After going through this, I wonder if the people that just reformat their hard drive each year have the right idea? Does the time spent un-installing, registry cleaning, de-fragging just add up to too much? Are you better off just starting clean each year and not worrying about all these maintenance tasks? Especially now that it seems like we replace our computers far less frequently, is Bit-Rot becoming a much worse problem?

Written by smist08

May 4, 2013 at 2:31 pm

User Roles and Security in Sage 300 ERP


Introduction

Role-based security and user roles are terms that are in vogue right now in many ERP systems. Although Sage 300 ERP doesn't use this terminology, it essentially gives you the same thing. This blog looks a bit at how you set up Sage 300 ERP application security and how it matches role-based security.

Users

First you create your Sage 300 ERP users. This is a fairly straightforward process using the Administrative Services Users function.

[Screenshot: user1]

Here you create your users and set their language, initial password and a few other security-related items.

Security Groups

Security Groups are your roles. For each application, you define one of these per role. For instance, below we show a security group for the A/R Invoice Entry Clerk role; in it we define exactly which functions the role requires.

[Screenshot: secgrp]

Some roles might involve functions from several applications; in this case you would need a security group for each application, but they can all be assigned together for the role.

User Authorizations

User Authorizations is where you assign the various roles to your users. Below I’ve assigned myself to the A/R Clerk role.

[Screenshot: userauth]

If multiple applications are involved then you would need to add a group id for each application that makes up the role.

Thus we can create our users, create our roles (security groups in Sage 300 ERP terminology), and assign them to users in User Authorizations. As you can see below, signing on as STEVE now results in a much less cluttered desktop with just the appropriate tasks for my role.

[Screenshot: desksec]

Further Security

As you can see above in the Users screen, there are quite a few security options to choose from depending on your needs. One thing not to forget is that there are a number of system-wide security options configured from the Security… button in Database Setup.

[Screenshot: dbsec]

Also remember to enable application security for the system database for your companies. For many small customers, perhaps application security isn't an issue; I've also seen sites where everyone just logs in as ADMIN. But if you have several users and separation of duties is important, then you should be running with security turned on.

[Screenshot: dbsec2]

Where is Security Implemented?

In the example above we see how security affects what the user sees on their desktop. Generally, from a visual point of view, we hide anything a user does not have access to. This means setting up security is a great way of uncluttering people's workspaces. However, this is a visual usability issue: we don't want people clicking on things and getting errors saying they aren't allowed. Much better to just present a cleaner slate.

But this isn't really security; at most it's a thin first layer. The real security is in the business logic layer. All access to Sage 300 functions goes through the business logic layer, and this is where security is enforced. This way, even if you run macros, run UIs from outside the desktop, or find a way to run an import into something you don't have access to, it will all fail if you don't have permission.
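
To see where the enforcement happens, below is a minimal sketch using the Java API described later on this page. The user name, password and the specific view are hypothetical; the point is that if this user has no rights to A/R invoice batches, opening the view fails in the business logic with the reasons left on the error stack, no matter how the view was reached.

import com.sage.accpac.sm.*;
import java.util.Date;

public class SecurityDemo
{
    public static void main(String[] args)
    {
        // Sign on as a restricted user (hypothetical credentials).
        Session session = new Session(new ProgramSet(), new SharedDataSet(),
                "CLERK", "PASSWORD", "SAMINC", new Date());
        Program program = new Program(session, "XZ", "XZ0001", "61A");
        try
        {
            // AR0031 is the A/R Invoice Batch view from the example below;
            // opening it without rights should fail in the business logic.
            View batch = new View(program, "AR0031");
            batch.dispose();
        }
        catch (Exception e)
        {
            // The error stack explains why access was denied.
            for (int i = 0; i < program.getErrors().getCount(); i++)
                System.out.println(program.getErrors().get(i).getMessage());
        }
    }
}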

Summary

Sage 300 ERP security is a good mechanism to assign users to their appropriate roles and as a result simplify their workspace. This is important in accounting where separation of duties is an important necessity to prevent fraud.

Sage 300 ERP – Data Integrity


Introduction

Modern ERP systems maintain a company's full financial history for many years. People want to be confident that all that data is correct and makes sense. So how can you be confident that your database has full referential integrity, especially after years and years of operation? The Sage 300 ERP Data Integrity function is a way to validate the integrity of a database. Modern computers are much more reliable than when our Data Integrity function was originally written, but it still serves a good purpose. In this article we will explore some of the protections of data integrity in Sage 300, along with some of the possible causes of corruption.

Database Transactioning

The number one protection of data integrity in Sage 300 is database transactioning. Data is always written to the database within a database transaction, and database transactions take the database from one state with full data integrity to the next state with full data integrity. A database transaction is guaranteed by the database server to be either entirely written to the physical database or not written at all; you will never see part of a transaction.

For instance, as we post a G/L batch, we post each entry as a database transaction. Since each entry in a G/L batch must be balanced, database transactioning guarantees that the G/L is always in balance and hence data integrity is maintained.
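
To illustrate the idea (this is not how Sage 300 itself is implemented, and the table and column names are invented), here is a minimal JDBC sketch: the commit/rollback pattern is what guarantees a balanced entry is written entirely or not at all.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransactionDemo
{
    public static void postBalancedEntry(String url) throws SQLException
    {
        try (Connection conn = DriverManager.getConnection(url))
        {
            conn.setAutoCommit(false);   // start a transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO GLDETAIL (ACCOUNT, AMOUNT) VALUES (?, ?)"))
            {
                ps.setString(1, "1000");
                ps.setBigDecimal(2, new java.math.BigDecimal("100.00"));
                ps.executeUpdate();
                ps.setString(1, "4000");
                ps.setBigDecimal(2, new java.math.BigDecimal("-100.00"));
                ps.executeUpdate();
                conn.commit();           // both rows become visible together...
            }
            catch (SQLException e)
            {
                conn.rollback();         // ...or neither does
                throw e;
            }
        }
    }
}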

Where Do Integrity Errors Come From?

Database transactioning sounds great, and in fact with it we see very few database problems in Sage 300 ERP. But when we do get integrity problems, where do they come from?

Below is a list of some of the main causes of data integrity problems. I’m sure there are more. I’m not looking to blame anyone (including myself), just to point out the main causes I’ve seen:

  • Bugs in the program. If Sage 300 asks SQL Server to store incorrect data, it will do so in a completely reliable transactional manner. Hopefully our QA processes catch most of these and this doesn’t happen often; but, Sage 300 is a large complicated program and mistakes happen.
  • People editing database tables directly in SQL Server Enterprise Manager. For various reasons people might try to put something in the database that the program doesn’t allow, and often this leads to database corruption.
  • Third party programs that write to the Sage 300 database directly. We do a lot of data validation checking in our business logic before allowing data to be written to the database, but if this is bypassed then corruption occurs. A common one in this case is not handling currency decimal places correctly.
  • Data validation needs to be tightened. Now and again, someone has written data that we accepted as valid that wasn’t. Then we had to tighten our data validation routines. The good news here is that we’ve been doing this for a long time now.
  • Bugs in the database server. We've seen database indexes get corrupted, which can lead to further problems even after the indexes are fixed (because of other data written while they were corrupt).
  • Partial backups or restores. We’ve seen people back up the tables for each application independently and then restore them. Perhaps to try to put A/R back to yesterday. But this corrupts the database since there is often matching data that needs to be in sync in Bank, Taxes or perhaps Order Entry. Make sure you always backup and restore the database as a whole.
  • Hardware glitches. Even with CRC checking and such, strange errors can start to appear from hard disk or memory hardware failures in computers.

The Data Integrity Checker

To find these sort of problems Sage 300 ERP has a data integrity checker in its Administrative Services. The main screen looks like:

[Screenshot: dataint1]

You select the applications you want to check and whether you want to fix any minor errors. Since this can be a long process, for several applications you can also configure which parts of the application to check, by selecting the application and choosing the application options in a screen like:

[Screenshot: dataint2]

The end result is a report listing all the errors found.

What Does the Integrity Checker Look For?

So what does the Integrity Checker do? Below is a list of some of the checks that are typically made:

  • Check the integrity of each file by reading each record and calling the View Verify API, which calls the business logic to validate the record. This includes things like checking that the decimals of a money amount are correct, that the data is correct for its data type, and that foreign keys are valid.
  • For header/detail type relationships there are often total or summary fields in the header, like the total amount of an order or the number of detail lines. The integrity checker will read through the details and add up these numbers to ensure they match the header (see the sketch after this list).
  • Check the database for any detail records that don’t have a matching header record (orphans).
  • Each application then knows about all sorts of cross file relationships that must be maintained and the Integrity Checker for that application will validate all of these relationships.
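
Here is a rough sketch of the header/detail check mentioned above, written against the Java API covered later on this page. The view IDs and field names are hypothetical, and I'm assuming goTop, like goNext, returns whether a record was found.

// View IDs and field names here are made up, for illustration only.
void checkHeaderTotal(Program program)
{
    View header = new View(program, "XX0010");
    View detail = new View(program, "XX0011");
    header.compose(detail);
    detail.compose(header);

    if (header.goTop())
    {
        double headerTotal = Double.parseDouble(header.get("TOTAL").toString());
        double detailSum = 0.0;
        // Read through the composed details, accumulating the amounts.
        if (detail.goTop())
        {
            do
            {
                detailSum += Double.parseDouble(detail.get("AMOUNT").toString());
            } while (detail.goNext());
        }
        if (detailSum != headerTotal)
        {
            System.out.println("Integrity error: details sum to " + detailSum
                    + " but the header total is " + headerTotal);
        }
    }

    header.dispose();
    detail.dispose();
}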

What Does Fix Minor Errors Do?

There is a check box to fix minor errors, but what does it do? Mostly it fixes up header/detail relationships by correcting any total or summary fields in header records. It can also delete orphaned detail records. But generally it doesn't attempt much, because we don't want to risk making things worse.

But it’s Slow

The big complaint about the Data Integrity checker is that it's slow. This is because it goes through every record in the database as well as checking all the cross dependencies. These days we see company databases that are hundreds of gigabytes in size. Generally the complaint is that you can't just run it as routine maintenance overnight; you tend to have to configure what you want to run and check things selectively. It's also best to run it when people aren't in the system, since it puts a fair bit of load on the system.

But this does open up an opportunity for third party developers. Companies like Tairox and Orchid offer solutions to automate data integrity or to run it as a service.

Summary

Even with super reliable modern databases and hardware, data integrity errors can still creep in and need to be dealt with. Just being aware they exist is half the battle. Also remember that it is extremely important to have regular full backups of your data in case of a really catastrophic failure.

 

The Sage 300 ERP Java API


Introduction

With version 6.0A of Sage 300 ERP we introduced a native Java API to all the Sage 300 Business Logic (Views). We did this in support of our SData implementation, which we wrote in Java. This API allows Java programmers to access all the Sage 300 ERP business logic along the same lines as our .Net API and our COM API. This API isn't built on top of either COM or .Net; it talks directly to the underlying C DLLs in System Manager. This provides better performance and allows us to compile this part of the system for Linux with no Microsoft dependencies. Internally we usually refer to this API as SAJava.

All the Sage 300 Business Logic objects have the same API; this makes it easier for us to produce these different language APIs to facilitate interoperability with all sorts of external systems, allowing programmers there to write code in a natural manner, with any required interop layer provided by us. The Java API uses a Java Native Interface (JNI) interop layer to talk to our Windows DLLs (or Linux shared objects). This is a one-way communication: we only use it to call the DLLs; we never have the DLLs calling our Java code (that direction is often dangerous and leads to the problems frequently encountered with JNI). Our JNI code handles all the data conversions between Java and C, and provides exception handling to trap and handle exceptions that can happen in C code (like bad pointers).

I've blogged about this API a bit indirectly in the past when talking about how to write server-side code for our SData service, for instance here, here and here. Generally, to add custom programming to SData feeds you write Java classes that inherit from our standard SData classes. When you interact with the Views in that environment you use this Java API, but all the libraries are already included and all the details of signing on are handled for you; the framework starts you off at a point where you can directly open and call Views. In this posting we'll back up a bit to cover full usage, so that you can use this API directly, in isolation, without requiring any other framework.

Getting Started

First, to use the Java API you need to include its jar file in your project. This file is located in the Tomcat\lib folder. This changed a bit between version 6.0A and the 2012 version. For 6.0A the folder is: C:\Program Files (x86)\Common Files\Sage\Sage ERP Accpac\Tomcat\lib and the file is SystemManager.jar. For the 2012 version the folder is: C:\Program Files (x86)\Common Files\Sage\Sage 300 ERP\Tomcat\lib and the file is com.sage.accpac.sdk.accpac.sajava-6.1.jar. Then you need to import the classes into any source file that uses them via:

import com.sage.accpac.sm.*;

Once you have these things included in your Java project you can start creating objects and calling methods. However, due to security, you must first sign on to a session and then create all other objects from that session.

The documentation is in the form of JavaDoc and is located on the DPP Wiki. The 2012 version is here: http://dppwiki.sage300erp.com//javadocs/v6.1/SystemManager/. You can find all the classes, methods and properties here. To access this, you must be part of the Sage 300 ERP Developer Program. A key benefit to joining this program is access to this wiki which contains all the developer documentation that we produce.

Signing On

First you must create a session with some code like:

Session session;
session = new Session(new ProgramSet(), new SharedDataSet(), "ADMIN",
     "ADMIN", "SAMINC", new Date());

This signs your session on to the company SAMINC using today's date as the session date. The ProgramSet and SharedDataSet are used when we deploy in a hosted configuration and run multi-tenant; in that case they must be set up correctly by the system to indicate which tenant the session is for. In most normal on-premises applications the calls shown above are fine and give you the one default tenant that exists.

Then you must create a program from the session:

Program program;
program = new Program(session, "XZ", "XZ0001", "61A");

If you read my last blog post, this might appear a bit backwards compared to the COM API, where this information goes into the session.Init call that comes first. This is true, but the information is required regardless.

Using Views

Now that you have a program you can start opening and using Views. As an example, let's look at a method that enters A/R invoices. Like many things, I started with macro recording to get the right Views and some syntax. Macro recording produces VBA code, but it isn't hard to convert this to Java quickly. Anyone familiar with Sage 300 ERP macro recording will recognize the style and variable names in the following method. This method assumes there are class variables for the program and session that were created as indicated above. The key point of the example is to show how to open Views, compose Views and then use them. For more general information on Sage 300 ERP's Views, have a look at this and this.

    public String enterARInvoices()
    {
        int iEntry;
        int iDetail;
        int numEntries = 20;
        int numDetails = 5;
        String sBatchNum;

        View ARINVOICE1batch = new View(program, "AR0031");
        View ARINVOICE1header = new View(program, "AR0032");
        View ARINVOICE1detail1 = new View(program, "AR0033");
        View ARINVOICE1detail2 = new View(program, "AR0034");
        View ARINVOICE1detail3 = new View(program, "AR0402");
        View ARINVOICE1detail4 = new View(program, "AR0401");
        View ARCUSTOMER1header = new View(program, "AR0024");

        ARINVOICE1batch.compose(ARINVOICE1header);
        ARINVOICE1header.compose(ARINVOICE1batch, ARINVOICE1detail1, ARINVOICE1detail2, ARINVOICE1detail3, null);
        ARINVOICE1detail1.compose(ARINVOICE1header, ARINVOICE1batch, ARINVOICE1detail4);
        ARINVOICE1detail2.compose(ARINVOICE1header);
        ARINVOICE1detail3.compose(ARINVOICE1header);
        ARINVOICE1detail4.compose(ARINVOICE1detail1);

        // Create the batch
        ARINVOICE1batch.recordGenerate(RecordGenerateMode.Insert);
        ARINVOICE1batch.set("PROCESSCMD", "1");      // Process Command
        ARINVOICE1batch.process();
        ARINVOICE1batch.read(false);
        sBatchNum = ARINVOICE1batch.get("CNTBTCH").toString();

        // Loop through creating the entries
        for (iEntry = 0; iEntry < numEntries; iEntry++)
        {
            try
            {
                ARINVOICE1detail1.cancel();
                ARINVOICE1detail2.cancel();
                ARINVOICE1header.recordGenerate(RecordGenerateMode.DelayKey);
                ARINVOICE1detail1.recordClear();
                ARINVOICE1detail2.recordClear();

                ARINVOICE1header.set("PROCESSCMD", "4");
                ARINVOICE1header.process();

                // Cycle through the customers, wrapping back to the top at the end
                if (false == ARCUSTOMER1header.goNext())
                {
                    ARCUSTOMER1header.goTop();
                }

                ARINVOICE1header.set("IDCUST", "1200");

                for (iDetail = 0; iDetail < numDetails; iDetail++)
                {
                    ARINVOICE1detail1.recordClear();
                    ARINVOICE1detail1.recordGenerate(RecordGenerateMode.NoInsert);
                    ARINVOICE1detail1.process();

                    ARINVOICE1detail1.set("IDITEM", "CA-78");    // Item Number
                    ARINVOICE1detail1.insert();
                }

                ARINVOICE1header.insert();
            }
            catch (Exception e)
            {
                // If there is nothing on the Sage 300 error stack then this is
                // an unexpected Java exception; otherwise report the errors.
                int count = program.getErrors().getCount();
                if (0 == count)
                {
                    e.printStackTrace();
                }
                for (int i = 0; i < count; i++)
                {
                    System.out.println(program.getErrors().get(i).getMessage());
                }
            }
        }

        ARINVOICE1batch.dispose();
        ARINVOICE1header.dispose();
        ARINVOICE1detail1.dispose();
        ARINVOICE1detail2.dispose();
        ARINVOICE1detail3.dispose();
        ARINVOICE1detail4.dispose();
        ARCUSTOMER1header.dispose();

        return( sBatchNum );
    }

Notice that you can explicitly close things by calling the dispose method. This is usually preferable to waiting for the Java garbage collector to reclaim them; it tends to keep resource usage down if you are opening and closing things a lot.
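
One way to make sure dispose always runs, even when an exception is thrown part way through, is to wrap the view usage in try/finally. A minimal sketch:

// AR0031 is the A/R Invoice Batch view from the example above.
View arBatch = new View(program, "AR0031");
try
{
    // ... work with the view ...
}
finally
{
    arBatch.dispose();   // release the underlying C resources promptly
}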

Errors

If a call fails, there are a couple of cases. If it's a simple expected thing, like reaching the end of records while fetching through them, then the routine returns a simple return code that you can easily handle in your code. If something worse happens, the routine throws an exception. As in other Sage 300 ERP APIs, there is an error stack which may contain a number of error messages explaining what went wrong. In the catch clause above we first check whether there are any errors on the error stack; if not, we print the stack trace to allow debugging of what went wrong. Otherwise we loop through the Sage 300 errors and print them for diagnostic purposes. When programming Sage 300 ERP, always make sure you have an error handler, as it can give you very good information when debugging your program.

Summary

The Sage 300 ERP Java API gives yet another tool for integrators to integrate to Sage 300 ERP from external systems. It is ideal for Java programmers who would like to write their integration entirely in Java. This is often a benefit when the SDK for the external system is itself written around the Java programming language.

The Sage Hybrid Cloud


Introduction

We introduced the concept of the Sage Hybrid Cloud along with a number of connected services at our Sage Summit conference back in August. This is intended to be a cloud based platform that greatly augments our on-premises business applications.

This blog posting will look at this platform in a bit more depth. Keep in mind that this platform is still under rapid development and things are changing quickly; if we think of better ways to do things, we will adopt them. We are approaching this with an Agile/startup mentality, so we aren't going to go off and develop this platform in a vacuum for years. We will develop the functionality as we need it, for our real applications. This way we won't spend time building infrastructure that no one ends up using, and we will get feedback quicker on what is needed, since we will be releasing in quick cycles.

The Hybrid Cloud Platform

Below is a diagram showing the overall architecture of the platform. We have a number of cloud services hosted in the Microsoft Azure cloud, a number of Sage business applications with a connector to this cloud, and a number of mobile/web applications built on top of this hybrid cloud platform. Notice that pieces of the platform are already in use: Sage Construction Anywhere (SCA) is a released product, and Sage 300 CRE already has a connector to this cloud to support the SCA mobile application.

The purple box at the bottom represents our current APIs and access methods, and just re-iterates that these are still present and being used.

The red box indicates that we will be hosting ERPs in this environment in a similar manner to our current cloud offerings like Sage300Online.com. We’ll talk about this in much more detail in future blog posts. But consider this Sage hosted applications version 2.0.

Mobile Applications

We demoed a number of mobile applications that we have under development at Summit; some screenshots are here. We are working hard to make these applications provide a first-class user experience. We are developing them in various technologies and combinations of technologies to make the user experience the best possible. We are writing both HTML5/JavaScript applications using the Argos-SDK, and native iOS, Windows 8 Metro and Android applications. Plus there are technologies that allow us to combine these approaches, using each where it makes sense in an application.

These mobile applications aren't just current ERP screens ported to mobile/web technologies; they are whole new applications that didn't exist before these powerful mobile devices came along to enable them.

ERP Connectors

Each ERP needs to connect to the Hybrid Cloud, both to upload items that are needed for lookup by the cloud applications (like finders) and to download transactions to enter into the ERP on a connected application's behalf. The intent is to have one connector for each business application, rather than having to install and configure a separate connector for each connected service (of which we hope there will be dozens).

We want to keep the TCO of the solution as low as possible. To this end we don't want the end user to have to configure any firewalls, DMZs or web servers. The connector only calls out to the cloud platform; there are never calls into the connector. Additionally, you only need to configure the connector once with your SageID and away you go.

The connector will use SData synchronization to synchronize the various files. This way it doesn't matter if your on-premises ERP is offline; it will catch up later. This makes the system much more robust, since your mobile users can keep working even if you turn all your computers off completely.

SData

We will use SData as the communications mechanism from the hybrid cloud. The cloud will host a large set of SData feeds to be used either by the mobile and web applications or by the on-premises ERP connectors.

Since SData is based on industry standards like REST, Atom and RSS, it's easy for pretty much any web or mobile framework to use it. All modern toolkits have this support built in. Plus we provide SDKs like the Argos-SDK that have extra SData support built in.
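
For instance, reading an SData feed needs nothing more than an HTTP GET returning Atom XML. The URL below is hypothetical, just following the general SData URL pattern; a real feed address would come from the service's documentation.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SDataGetDemo
{
    public static void main(String[] args) throws Exception
    {
        // Hypothetical feed following the general SData pattern
        // (server/sdata/application/contract/dataset/resource).
        URL url = new URL("http://localhost/sdata/myapp/mycontract/-/customers");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/atom+xml");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream())))
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                System.out.println(line);   // the Atom XML payload
            }
        }
    }
}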

ISVs

The intent is that ISVs can use the SData feeds from the Hybrid Cloud as well, to develop their own applications or to connect existing cloud-based applications to all our Sage business applications. However, we won't start out with a complete database model; we will basically add to this cloud data model as we require things for our Sage-developed solutions as well as for select ISVs. The intent is to get common functionality going first and then fill in the more obscure details later. For instance, most connected services will need to access common master files like customers, vendors and items, and most will need to enter common documents like orders and invoices.

The feeling is that most integrations to ERP systems actually don’t access that many things. So the hope is that once the most common master files are synchronized and once the system accepts the most common transactions, then a great number of applications will be possible.

There will also be parts of the cloud database that don’t have any corresponding part in the ERP. There will be a fair bit of data that resides entirely in the cloud that is specific to the cloud portions of these applications.

SageID

When you sign on to all these various connected services, we don't want you to need a separate login id and password for each one. We would like you to register a user id and password with Sage once and then use that identity to access every Sage connected service.

Ultimately we would like this to be the user id and password that you use to sign on to our on-premises applications as well. Then this would be your one identity for all Sage on-premises and cloud applications, and all your access rights and roles would be associated with this one identity.

Summary

The Sage Hybrid Cloud is an exciting project. It is starting small, with the Sage Construction Anywhere product already shipping, and will develop quickly as we add other services. This should go quickly since we are leveraging the R&D resources of many Sage products to get exciting new mobile products into market, spanning the customer base of many Sage business applications.

Our First Hackathon


Introduction

Hackathons are becoming a fairly common method to stimulate innovation at companies, software or otherwise. We recently held our first Hackathon here with the Sage 300 ERP development team. We are adding Hackathons to our Sage Innovation Process as an idea generator and a concept tester.

Probably the most famous recent Hackathons are those held at Facebook, which resulted in the Like button, Facebook Chat and the Timeline feature. If you search for Facebook Hackathons on YouTube, you can find all sorts of videos showing them. In fact, Hackathons are now conducted in industries outside of software development, in areas like government and food production.

The key goal of Hackathons is to stimulate the creative juices in the organization: to get ideas flowing, to provide a platform to quickly develop them, to show them off, and then possibly productize them. In some sense you would like everyone to be creative and innovative all the time, but the pressures of day-to-day tasks usually dampen such things.

Hackathons also give programmers a chance to do projects they’ve always wanted to do and felt were important, but couldn’t convince Product Management to prioritize high enough to get done.

Logistics

We decided to have a two-day Hackathon. We had a kick-off meeting just before lunch on Wednesday, and then the teams had two days to hack, with a results presentation just after lunch on Friday. Some people formed small teams; others worked solo. Basically the two days were up to them. We provided snacks and lunch on Thursday.

Facebook runs their Hackathons for 24 hours and people don’t sleep. We thought that too extreme. Although some people worked quite long hours getting their hacks to work, no one missed a night’s sleep. Our feeling is that sleep deprivation doesn’t help and is in fact quite destructive. Often a good night’s sleep is what you need to solve difficult problems.

Idea Generation

When we were initially planning the Hackathon, we were worried that people wouldn’t participate because they would have trouble coming up with ideas of what to do. So to try to alleviate this, we came up with a list of suggestions for people.

What we found instead was that this wasn’t a problem at all. We had really good participation and none of our original ideas were used. All teams either had a brain storming session to start with, or had their own ideas that they had been thinking about and just needed an opportunity to explore them.

One key is to give plenty of warning of an upcoming Hackathon, so people can have plenty of time to come up with ideas, and to network with their peers to develop teams.

Results

The results of the Hackathon greatly exceeded our expectations. All the teams, except for one that had to deal with an emergency issue, were able to demonstrate useful and exciting results.

The projects were very diverse including:

  • testing out a new automated test tool
  • evaluating static code analysis tools on our code
  • developing a new customer information connected service
  • creating a better tool for writing knowledge base articles
  • creating a direct-to-customer advertising feature
  • adding a key CRM integration feature
  • adding Skype and Google Maps integration
  • fixing some long-standing annoyances that never made the priority list

Below is a picture of the winning team, which hacked a number of useful social media integrations into the Sage 300 ERP product, including a rating system for things like customers (similar to Amazon ratings), and integrations to Skype, Google Maps and a number of other things.

Write Up

We insisted that each group write up their results. We wanted to document all learnings: what was tried that didn't work out, as well as the successes. We did this via a page on our internal development Wiki. This is a very important part of Hackathons, since you want to build on everything accomplished.

Learnings

Our summary presentations ran long because everyone was so excited to show so much; we decided that next time we will limit each presentation to 5 minutes.

The idea of giving awards turned out to be quite controversial. We had a best project award and a better luck next time award. Everyone felt we should get rid of the better luck next time award. Some people liked the idea of having a "winner"; others felt it corrupted the hackathon process by motivating people to produce visual fluff over perhaps more technical work.

The idea of the "better luck next time" award was to celebrate failure, since we want to motivate people to take risks and not be too conservative. However, people seemed to think this wasn't a great idea, since a couple of "failures" were actually considered successes: they provided proof that a couple of popular technologies weren't really ready for prime time.

The two-day time frame seemed to work quite well for our staff; no one wanted to switch to the 24-hour no-sleep method. The consensus was that we should try to repeat the Hackathon every 2 to 3 months. It makes no sense to do a Hackathon only once; you really need to keep doing them regularly to exercise your innovation muscles or they will just atrophy again.

Summary

Hackathons are a great way to stimulate innovation in an organization. Not only do you generate a lot of ideas, but you often pick off some low-hanging fruit, or you end up with a POC (proof of concept) to prove out an idea to productize. Hackathons have been used successfully at many companies in many industries, and our own experience was very positive.

Written by smist08

October 20, 2012 at 10:44 am

The Road to DevOps Part 2


Introduction

Last week we looked at an introduction to DevOps, concentrating on the issues around frequently deploying new versions of software. This week we continue with DevOps, concentrating on maintaining and monitoring the system during normal operations. This includes ensuring the system is available, provisioning new users, removing delinquent users, and generally monitoring the system to ensure it is healthy.

SLAs

Generally everyone wants a service that is always available, always healthy and working well. But this is too vague a statement; the reality is that things happen and need to be dealt with. This has to be acknowledged up front and strategies put in place to deal with it. You need stronger guidelines, and this usually starts with a Service Level Agreement (SLA) that is laid out for your customers. This details the various metrics you promise to achieve and what happens when you don't. Generally you need a good set of performance metrics to judge your service against.

Some of the common performance metrics are:

  • Throughput: system response speed.
  • Response Time: how quickly a given issue will be resolved.
  • Reliability: system availability.
  • Load Balancing: when elasticity kicks in.
  • Time Outages: will services be unavailable during certain periods (e.g. for maintenance)?
  • Service Slowdown: will services remain available, but with much lower throughput?
  • Durability: how likely you are to lose data.
  • Elasticity: how much a resource can grow.
  • Linearity: system performance as the load increases.
  • Agility: how quickly the system responds to load changes.
  • Automation: the percentage of requests handled without human interaction.

Even if you don't publish these metrics externally, you need to track them to know how you are doing. Generally a DevOps team takes the approach of continual improvement (like Kaizen). A good DevOps team has dashboards that track these metrics and is always looking for ways to improve them.

Monitoring

A basic rule with cloud applications is that you need to instrument and monitor everything. First, this allows you to generate your SLA dashboard and ensure you are meeting your SLA. Second, it lets you feed information back into development on what is working well and what is working badly. For instance, you can track how much a given feature is used, or how many people start using a feature but don't complete the operation; this could highlight a usability problem that needs to be addressed.

Similarly for performance optimizations: you don't want to bother optimizing something that is infrequently used. But from good monitoring you can see, for instance, a query that is run very frequently but isn't delivering good performance. Attacking this would be helpful, both for the people issuing the query and for other people slowed down while these slower queries are being processed.
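
As a sketch of what this kind of instrumentation can look like in code, here is a simple query timer. The threshold and the idea of just printing slow queries (rather than shipping the numbers to a metrics store) are my own simplifications:

import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class QueryMetrics
{
    // Simple counters a dashboard could poll.
    private static final AtomicLong queryCount = new AtomicLong();
    private static final AtomicLong slowQueryCount = new AtomicLong();
    private static final long SLOW_THRESHOLD_MS = 500;   // arbitrary cut-off

    // Usage: QueryMetrics.timed("customerLookup", () -> dao.findCustomer(id));
    public static <T> T timed(String name, Supplier<T> query)
    {
        long start = System.nanoTime();
        try
        {
            return query.get();
        }
        finally
        {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            queryCount.incrementAndGet();
            if (elapsedMs > SLOW_THRESHOLD_MS)
            {
                slowQueryCount.incrementAndGet();
                System.out.println(name + " took " + elapsedMs + " ms");
            }
        }
    }
}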

The key point is to address what really matters, based on hard facts gathered by good instrumentation about what is really affecting your users. Perhaps you don't need a monitoring center like the one below, but it sure would be cool.

Provisioning

Another operation that hopefully is going on at a rapid pace is provisioning new users; the reverse, hopefully at a very slow pace, is removing users. In normal operations, users should be able to sign up for your service very easily, perhaps by filling out a web page and acknowledging a confirmation e-mail. All this should happen pretty much instantly.

What should not happen is this: the user fills out a web form, which is submitted to a queue to be processed; then, in the data center, someone reads the request and performs a number of manual steps to set up the user; and hours or days later an e-mail is sent to the customer letting them know they can use the service.

This is really a matter of a DevOps team's focus on automation. Chances are the steps of the manual process do need to be performed, which is fine, as long as they can be scripted and run automatically, eliminating any time delay. Generally a DevOps team is always looking to find any manual process and eliminate (automate) it.

Elastic Operations

Most people don't buy their own data centers or their own server hardware anymore. Especially when starting up, you don't want to make a huge investment in capital equipment. Most people use an IaaS or PaaS service like Amazon or Azure. These services are "elastic" in that you can run scripts to add or remove capacity, so they stretch and shrink with demand. You pay for what you use, so for each server you have running in this environment you are paying some fee.

Generally a DevOps team should be monitoring the system load, and when it hits a certain level, scripts run that create a new server and add it to the system so the load is shared by more computing resources. By the same token, when usage drops, perhaps on the weekend or late at night, you would like to drop some of these computing resources to save money. Again, the DevOps team needs to develop the programs and scripts to support this sort of operation automatically (you don't want to be paying someone to juggle these system resources). Adding resources is usually easier, since a new server comes up empty and, once known to the load balancer, starts being used. When shutting one down, you have to ensure no one is using it (or have a method to move the active users to another server); often this is done by stopping new requests to the server and waiting for all the users to log off or become inactive. Generally, if your application is completely stateless, this is all much easier.
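
A sketch of the shape of such a scaling script, with a stand-in interface rather than a real IaaS SDK (none of these method names come from Amazon's or Azure's actual APIs):

public class AutoScaler
{
    // Stand-in for whatever IaaS SDK you use; entirely hypothetical.
    interface CloudApi
    {
        double averageCpuLoad();           // 0.0 to 1.0 across the pool
        int serverCount();
        void addServer();                  // bring up a new instance
        void drainAndRemoveServer();       // stop new requests, wait for
                                           // users to log off, then shut down
    }

    static void scale(CloudApi cloud)
    {
        double load = cloud.averageCpuLoad();
        if (load > 0.75)
        {
            cloud.addServer();             // scale out under heavy load
        }
        else if (load < 0.25 && cloud.serverCount() > 2)
        {
            cloud.drainAndRemoveServer();  // scale in when quiet, but keep
                                           // some redundancy
        }
    }
}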

Disaster Recovery

It is also the responsibility of the DevOps team to ensure there is a good disaster recovery plan in place. For instance, the Azure and Amazon services have multiple datacenters; you need to control how your application is deployed to these and how backups and redundancy are managed. Generally, higher levels of redundancy and quicker switch-overs cost more money, so you need to make sure your plan is sufficient but not overdone.

Suppose you use Azure or Amazon and, even though you have a redundant deployment to multiple datacenters, the whole service goes down? Some companies actually have redundancy across IaaS providers, so if Azure goes down they can still run on Amazon. Practically speaking, unless you are very large or have a very tight SLA, this tends to be overkill. Generally you just state in your SLA that you aren't responsible for the provider being systemically down.

Summary

The transition from Waterfall to Agile development was an interesting one with a lot of pitfalls along the way. The transition from Agile development to DevOps is a bigger step and will involve many new learning opportunities. It takes a bit of patience, but in the end it should lead to an improved development organization and happier customers.

 
