Stephen Smith's Blog

Musings on Machine Learning…


Opening Sage 300 ERP Sessions


Introduction

Sage 300 ERP has a number of very flexible external APIs that allow programs to access all the business logic in the product. The business logic is stored in Views that are accessed via a standard API. To start using the business logic from one of our external APIs you first need to sign on to the API and establish a session. This article only talks about the AccpacCOMAPI, which is our main COM API. Sage 300 ERP also has an older COM API, usually referred to as a4wcom, so be sure to use the newer one we are talking about here. Many of the concepts can be adapted to other APIs like the .Net or Java APIs; however, to interact with other COM components like the Session Manager you must be using the AccpacCOMAPI. The examples in this posting will all be in Visual Basic 6.

This API has been around for a long time, but we recently received quite a few queries through customer support about establishing connections. So I thought it might be worthwhile writing a blog post on some of the use cases we try to support, some of the functionality that perhaps isn’t very widely known, as well as the reasons why some aspects work the way they do.

For a bit more background on the Sage 300 business logic have a look at this blog posting.

Libraries

Sage 300 ERP’s COM API can be used by any tool that understands COM and how to talk to COM objects. The first step is to add the COM object to your project. In VB6 you do this by going to Project – References and adding “ACCPAC COM API Object 1.0”. In some tools you can browse to the DLL and add that; in this case you browse to wherever you installed Sage 300 ERP and then browse for runtime\a4wcomex.dll.

Creating and Initializing

Once you have the library available, you need to get started. All objects in our COM API are created via methods in the API itself, but first you must create and initialize a session object. This is the root object from which everything else is derived. In VB there are a couple of ways to create the initial session object, either:

Dim mSession As New AccpacCOMAPI.AccpacSession

Or

Dim mSession As AccpacCOMAPI.AccpacSession
Set mSession = CreateObject("Accpac.Session")

Once you have a session object then you need to initialize it:

mSession.Init "", "XY", "XY1000", "61A"

If you are accessing the COM API from an external program and not an SDK application then the parameters don’t matter much. The first parameter is used when an SDK application is run from the desktop, to connect them up properly, and the other parameters are similarly for SDK applications and other APIs, for things like locating your application’s help files correctly. Generally for an external application you just want these set to valid values so things will proceed. The application ID “XY” is reserved for non-SDK applications to use, so you don’t run any risk of being confused with a third party application. It is important that you call Init before doing anything else; if you call some other method first then expect to get strange error messages.
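Putting create and Init together, here is a minimal start-up sketch with each parameter labeled according to the description above (the comments reflect my reading of it, not official documentation):

Dim mSession As AccpacCOMAPI.AccpacSession
Set mSession = CreateObject("Accpac.Session")

' Init must be the first call on the session.
' Parameter 1: desktop connection handle - empty for external programs.
' Parameter 2: application ID - "XY" is reserved for non-SDK programs.
' Parameter 3: program name, e.g. "XY1000".
' Parameter 4: application version.
mSession.Init "", "XY", "XY1000", "61A"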

Below is the object model of all the objects you can get from an initialized session:

[Object model diagram]

Company List

At this point we still haven’t signed on to a company. You can just go ahead and sign on, but you can also get a list of the companies available to sign on to. This is the API used by Sage 300 to build its sign-on dialogs. The session object has an organizations collection that you can traverse to get information on the available companies.

For i = 0 To mSession.Organizations.Count - 1
    Print mSession.Organizations.ItemByIndex(i).DatabaseID, _
          mSession.Organizations.ItemByIndex(i).Name
Next i

As you can see by the code, this API was invented by a C programmer and not a VB programmer.

Signing On

The main way you sign on to a company is to use the Open method.

mSession.Open "ADMIN", "ADMIN", "SAMLTD", Date, 0, ""

The main things you need for this method are the user ID, password, company ID and session date. After calling this, the next thing you usually do is create a database link, and then from the database link create your View objects. Now you can call the Views and use all the Sage 300 business logic. The disadvantage of this method is that you need to know the user ID and password, but otherwise you are good to go.
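As a sketch of what typically comes next (this assumes the A/R Customers view, AR0024, and its customer number field IDCUST; adapt it to whichever Views you need):

Dim mDBLinkCmpRW As AccpacCOMAPI.AccpacDBLink
Dim custView As AccpacCOMAPI.AccpacView

' Create a read/write link to the company database.
Set mDBLinkCmpRW = mSession.OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)

' Open a View by its roto ID - AR0024 is the A/R Customers view.
mDBLinkCmpRW.OpenView "AR0024", custView

' Browse all records and print each customer number.
custView.Browse "", True
Do While custView.Fetch
    Print custView.Fields("IDCUST").Value
Loop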

Session/Signon Managers

Of course with what we have discussed so far you could create your own sign-on dialog. But why re-invent the wheel? The main Sage 300 ERP COM library is intended to be called from both user interface programs and server processes; as a result it has no user interface functions itself and will never pop up a message box or a dialog box. It is strictly processing and no UI.

However we do provide a number of other ActiveX controls that are intended to be used as UI components. Two of these are the Signon Manager and the Session Manager. You only interact with the Session Manager and then the Session Manager uses the Signon Manager whenever it needs it.

So if you don’t want to have to know the user id and password then you use the Session Manager to create your session for you and you get back a session that has been created, initialized and opened for you. The user will be able to enter their user id, password and select the company and session date to use for processing.

To use the Session Manager you need to add a reference for “ACCPAC Session Manager 1.0” or access the runtime\a4wSessionMgr.dll. Then you would write some code like:

Dim signonID As Long
Dim mSession As AccpacCOMAPI.AccpacSession
Dim sessMgr As New AccpacSessionMgr

sessMgr.AppID = "XY"
sessMgr.ProgramName = "XY1000"
sessMgr.AppVersion = "54A"
sessMgr.CreateSession "", signonID, mSession

The intent of the session manager was to facilitate things like workflow management. So the first time someone accesses it, it will create a new session and the user will get a sign-on dialog. However the next time it is accessed, you will just get back the session the user opened the first time. This allows applications to be strung together in a workflow manner without each step requiring the user to sign on. If you do want a fresh sign-on, you can set the ForceNewSignon property to True. If there are two desktops signed in and ForceNewSignon is False, then the user will get a dialog box to choose which session they want.
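For example, a sketch of requesting a fresh sign-on rather than reusing an existing session (this is the snippet above with the ForceNewSignon property from the previous paragraph set):

Dim signonID As Long
Dim mSession As AccpacCOMAPI.AccpacSession
Dim sessMgr As New AccpacSessionMgr

sessMgr.AppID = "XY"
sessMgr.ProgramName = "XY1000"
sessMgr.AppVersion = "54A"
' Always show the sign-on dialog instead of reusing an open session.
sessMgr.ForceNewSignon = True
sessMgr.CreateSession "", signonID, mSession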

Summary

The external APIs to Sage 300 ERP are very powerful. Since the AccpacCOMAPI is used exclusively by our VB forms to access the Sage 300 business logic, you know that from this interface you can do anything that can be done from a regular UI. All business logic is exposed this way. So the intent of this posting was just to give you a little help in getting started to get at all that business logic.

The End of the Smart Phone Era?


Introduction

I saw this article in Business Insider “The End of the Smart Phone Era is Coming” and was just wondering what effect this would have on business applications like ERP and CRM. Basically will we all ditch our smart phones in exchange for smart eyeglasses? Do we want a virtual world super-imposed over the real world? Is this the way to really be always connected all the time?

Google made a big splash by introducing their vision with this video. Some of the initial reaction ranged from that this was the greatest thing ever to that now you would have absolutely no privacy since Google would see and hear everything you see and hear. Below is a Google glass fashion shoot.

[Google Glass fashion shoot photo]

Judging by recent patent applications, Microsoft is also working on something similar. Below is a diagram from Microsoft of some of their thinking.

[Microsoft smart glasses patent diagram]

ERP and CRM

In my world we’ve been battling with moving fairly complicated business applications to mobile devices like tablets and phones. We’ve been battling with fitting large amounts of data onto much smaller screens. In a way, large flat panel desktop monitors are great for our applications since you can see and manipulate large amounts of data. But everyone wants to do this on their phone, so how do we do that? At this point we are getting a grip on how to do business applications on devices. We are getting a grip on how to handle touch as the input mechanism instead of the keyboard and mouse. We are getting a grip on how to handle the fact that the app isn’t always connected to the network.

Now we hear that smart phones and tablets are just as obsolete as the desktop PC and laptop! So in this world, not only do we have a small screen, but we have to share it with the real world. Plus we have a whole new input model that is a combination of voice recognition and eye tracking technology.

[Smart glasses concept image]

I don’t think we’ll want to just super-impose our regular Order Entry screen onto the glasses over the real world. I suspect that rather than port our existing ERP and CRM functionality to glasses, more likely we’ll be re-inventing the way we do many business processes. This probably means a proliferation of new apps.

Physical Inventory Counts

One good application I was thinking of was to do physical inventory counts. This is always a painful but necessary process to catch theft and errors. Now you will be able to run your inventory count app in your glasses. As you walk around the warehouse, you just need to look at boxes and have the glasses record the barcode or QR code to count the inventory. For other items, perhaps you can look at something and then double-blink, the software then compares the visual image to all the pictures in the inventory database to find a match and count that item.

Sales Calls

Now you can have a glasses CRM app. Rather than bring up all your customer information on a tablet and keep referring to your tablet, you can see all the information on a customer right before your eyes. The glasses app will bring up the customer for you automatically based on your location and facial recognition software. Then the glasses can present to you all pertinent information on the customer, like his sales history, buying habits or that he’s late paying his bills. This should really impress your clients since it will appear that you care enough about them to know off the top of your head every detail about them. Then further the glasses can have recorded the whole chat, so if there are any disputes later, they can be reviewed.

Pottery Barn

In our nearby Pottery Barn, the items in the store are for display only. If you are interested in something, you need to talk to a salesperson, who looks up the item on their tablet to find out if they have it in stock in the store, in a local warehouse, in a regional warehouse or will need to get it shipped from the manufacturer.  Now there could be a glasses app that identifies the item you are interested in, perhaps by staring at its QR code and double-blinking. Then it can bring up additional catalog information on the item, including delivery logistics and such. Generally this could streamline the whole (painful) process of shopping at Pottery Barn.

Summary

Will the widespread use of such glasses lead to a true surveillance society? Rather than just a plethora of security cameras recording everyone’s movements, will everything anyone sees and hears through these glasses now be recorded and accessible to law enforcement and the government? Or will we manage the privacy concerns and bring in a new generation of connected users who look on our current phones the way we look back on the original Motorola brick cell phones?

Written by smist08

December 1, 2012 at 7:48 pm

The Road to DevOps Part 1


Introduction

Historically Development and Operations have been two separate departments that rarely talk to each other. Development produces a product using some sort of development methodology and when they’ve finished it, they release it and hand it off to Operations. Operations then installs the product, upgrades the users from the old version and then maintains the system, fixing hardware problems, logging software defect complaints and keeping backups. This worked fine when Development produced a release every 18 months or so, but doesn’t really work in the modern Web world where software can often be released hourly.

In the new world Development and Operations need to be combined into one team and one department, the goal being to remove any organizational or bureaucratic barriers between the two. It also recognizes that the goal of the company isn’t just to produce the software but to run it and to keep it available, up to date and healthy for all its customers. Operations has to provide feedback immediately to Development, and Development has to quickly address any issues and provide updates quickly back to Operations.

In this posting I’m covering the aspect of DevOps concerned with frequently rolling out new versions to the cloud. In a future posting I’ll cover the aspects of DevOps concerned with the normal running of a cloud based service, like provisioning new users, monitoring the system, scaling the system and such.

Agile to DevOps

We have transitioned our software development methodology from Waterfall to Agile. A key idea of Agile is that you break work into small stories that you complete in a single short sprint (in our case two weeks). A key goal is that during the sprint each story is done completely including testing, documentation and anything else required. This then leads to the product being in a releasable state at the end of each sprint. In an ideal world you could then release (or deploy) the product at the end of each sprint.

This is certainly an improvement over Waterfall where the product is only releasable every year or 18 months or so. But generally with Agile this is the ideal rather than what is actually achieved. Generally a release consists of the outcome of a number of sprints (usually around 10) followed by a short regression phase, held concurrently with a beta test, followed by release.

So the question is how do we remove this extra overhead and release continuously (or much more frequently)? This is where DevOps comes in. It is a methodology that has been developed to extend Agile development principles straight through to encompass the deployment and maintenance of the product. DevOps requires Development to change the way it does things just as much as it requires Operations to change. DevOps requires a much more team-based approach to doing things and requires organizational boundaries to be removed. DevOps requires a razor focus on automating all manual processes in the build, deploy, monitor and feedback cycle.

Development

Most development processes have an automated build system, usually built on something like Jenkins. The idea is that when you check in source code, the build system sees you checked it in, then it has rules that tell it what module it is part of, and rebuilds those modules, then it sees what modules depend on those modules and rebuilds those and so on. As part of the build process, unit tests are run and various automated tests are set off. The idea is that if anything goes wrong, it is very clear which check-in caused it and things can be quickly fixed and/or rolled back.

This is a good starting point, but for effective DevOps it needs to be refined. Most modern source control systems support branching (most famously Git). For DevOps it becomes even more crucial that the master branch of the product is always in a releasable state and can be deployed at any time. The way this is achieved is by developing each feature in a separate branch. Then when a feature is completely ready for release it can be pulled into the master branch, which means it can be deployed at any time. Below is a diagram of how this process typically works:

Automated Testing

Obviously in this environment, it isn’t going to work if for every frequent release, you need to run a complete thorough manual test of the entire product. In an ideal world you have very complete test coverage using various levels of automated testing. I covered some aspects of this here. You still want to do some manual testing to ensure that things still feel right, but you don’t want to have to be repeating a lot of functional testing.

Operations

Operations can then take any release and, in consultation with the various stakeholders, release it to production. Operations is then in charge of this part of things and ensures the new version is installed, data is converted and everything is running smoothly.

Some organizations release everything that is built, which means their web site can be updated hourly. GitHub is a good example of this. But generally for ERP or CRM type software we want to be a bit more controlled. Generally there is a schedule of releases (perhaps one release every two weeks) and then a schedule of when things need to be done to be included in that release, which then controls which branches get pulled into the master branch. This is to ensure that there aren’t any disruptions to business customers. Again you can see that this process is blending elements of QA along with Operations, which is why the DevOps team approach works so well.

A key idea that has emerged is the concept of “Infrastructure as Code”. In this world all Operations tasks are performed by code. This way things become much more repeatable and much more automated. It’s really this whole idea, that you build your infrastructure by writing programs that then do the real work, that has largely led to the movement of Developers into Operations.

DevOps

This is where Development and Operations must be working as a team. Operations has to let Developers know what tools (scripts) they require to deploy the new version. They need automated procedures to roll out the new version, convert data and such. They have to be working together very closely to develop all these scripts in the various tools like Jenkins, PowerShell, Maven, Ant, Chef, Puppet or Nexus.

Performing all this work takes a lot of effort. It has to be people’s full time job, or it just won’t get done properly. If people aren’t fully applied to this, manual processes will start to creep in, and productivity and quality will suffer.

Beyond successfully deploying the software, this team has to handle things when they go wrong. They need to be able to roll back a version, reverse the database schema changes and return to a known stable good state.

Summary

DevOps is a whole new profession, combining many of the skills of Development with those of Operations. People with these skill sets will be in high demand as this is becoming an area that is making or breaking companies on the Web and in the Cloud. No one likes to have outages; no one likes to roll out bad upgrades. In today’s fast paced world, these can put huge pressures on a company. DevOps as a profession and set of operating procedures is a good way to alleviate this pressure while keeping up with the fast pace.

Reporting Via Macros


Introduction

With our Sage 300 ERP 2012 release we updated our Crystal Reports runtime to the newest Crystal 2011 runtime (SP3 actually). The intent is to move to a fully supported version of Crystal Reports, so as they adapt to things like Windows 8 and Windows 2012 Server, we know we are fully supported and can get updates for any problems that show up. Plus it means that people customizing reports can take advantage of any of the new features there.

For reports you can print to preview, print to file or print directly to a printer. Then we have various options for printing from various web contexts like Quotes to Orders. You can drive reports from our regular forms, or you can write VBA macros that automate the reporting process.

This blog post is really for people that are controlling printing reports programmatically and are more affected by the changes in the Crystal runtime and more specifically changes in the Crystal Reports API.

Headache for Customizers

Our intent was that people performing customizations would use our API to drive Crystal Reports. Then your programs are upgrade-safe, since we maintain compatibility of our COM API. However it turns out that quite a few people have automated the report process by writing to the Crystal COM API directly.

This then leads to a problem because Crystal dropped support for their COM API. Not only did they drop support for it, but they removed it completely from the product. Hence anyone that is writing directly to the Crystal COM API will be broken by the Sage 300 ERP 2012 release, at least for new installs. If you had an older version and don’t un-install it, then you can still use the older version of the Crystal runtime (since it will still be there), but that isn’t a good long term solution as people upgrade computers and go to newer operating systems like Windows 8.

Crystal Reports now only supports a .Net Interface and a Java interface. For this version we had to change our internal interface to Crystal from COM to .Net. (The newer Web portal parts use the Java interface and so were ok).

Printing without User Intervention

It appears that one of the common reasons to go to the Crystal API directly is to print to file without any manual intervention. Often if you choose File as a print destination then we prompt you for the format and then prompt you for the file name to save to. People want to set these programmatically. Our API does have the ability to do this in a couple of situations.

Below is a macro I recorded to print O/E Quotes. I deleted any extra code, like the error handler, to make it a bit more compact. Then I edited the destination and format to change the print destination to PD_FILE and the format to PF_PDF.

Sub MainSub()
Dim temp As Boolean
 Dim rpt As AccpacCOMAPI.AccpacReport
 Set rpt = ReportSelect("OEQUOT01[OEQUOT01.RPT]", "      ", "      ")
 Dim rptPrintSetup As AccpacCOMAPI.AccpacPrintSetup
 Set rptPrintSetup = GetPrintSetup("      ", "      ")
 rptPrintSetup.DeviceName = "HP LaserJet P3010 Series UPD PS"
 rptPrintSetup.OutputName = "WSD-ad0e8bc6-396c-4e50-84c7-fab17beaf18a.006a"
 rptPrintSetup.Orientation = 1
 rptPrintSetup.PaperSize = 1
 rptPrintSetup.PaperSource = 15
 rpt.PrinterSetup rptPrintSetup
 rpt.SetParam "PRINTED", "0"              ' Report parameter: 4
 rpt.SetParam "QTYDEC", "0"               ' Report parameter: 5
 rpt.SetParam "SORTFROM", " "             ' Report parameter: 2
 rpt.SetParam "SORTTO", "ZZZZZZZZZZZZZZZZZZZZZZ"   ' Report parameter: 3
 rpt.SetParam "SWDELMETHOD", "3"          ' Report parameter: 6
 rpt.SetParam "PRINTKIT", "0"             ' Report parameter: 7
 rpt.SetParam "PRINTBOM", "0"             ' Report parameter: 8
 rpt.SetParam "@SELECTION_CRITERIA", "(({OEORDH.ORDNUMBER} >= """") AND 
     ({OEORDH.ORDNUMBER} <= ""ZZZZZZZZZZZZZZZZZZZZZZ"")) AND (({OEORDH.COMPLETE} = 1) OR 
     ({OEORDH.COMPLETE} = 2)) AND ({OEORDH.TYPE} = 4) AND (({OEORDH.PRINTSTAT} = 1) OR 
     ({OEORDH.PRINTSTAT} = 0) OR ({OEORDH.PRINTSTAT} = -1))"   ' Report parameter: 0
 rpt.NumOfCopies = 1
 rpt.Destination = PD_FILE
 rpt.Format = PF_PDF
 rpt.PrintDir = "c:\temp\quote.pdf"
 rpt.PrintReport
End Sub

 

Basically this technique will silently export to PDF files. The Format member also accepts PF_RTF, which will silently export in RTF format. The file exported to is specified in the PrintDir property. If this is a folder, the filename will be the same as the report name; if it’s a filename, that will be used (make sure you get the extension right).

You can also export silently to HTML format by setting the Destination to PD_HTML. For HTML, if there is one file then PrintDir specifies the filename, but sometimes HTML requires multiple files, in which case it will use PrintDir as a folder name.
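For example, here is a sketch of the tail end of the recorded macro above changed to export HTML instead (the folder name is just a placeholder):

' Export silently to HTML; if multiple files are produced,
' PrintDir is treated as a folder rather than a filename.
rpt.Destination = PD_HTML
rpt.PrintDir = "c:\temp\quotehtml"
rpt.PrintReport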

These are a few ways you can print reports to a file silently without user intervention.

Secret Parameters

In the recorded macro above, you might notice the strange parameter “@SELECTION_CRITERIA”. If you look in the report, there is no such report parameter. Basically our API lets you set report parameters and then print the report. However there are a few other things you might want to do with the Crystal Reports API. Below is a list of these special parameters that might help you get a grip on some more aspects of the Crystal Reports API:

“@SELECTION_CRITERIA”: PESetSelectionFormula(job, asParamValue). This parameter is translated into the API call that sets the selection criteria in the report.

“@SELECTION_CRITERIA_xxxx”: where xxxx is the name assigned to the subreport when it was first created. This call will be translated into

job = PEOpenSubreport( parentJob, xxxx)
PESetSelectionFormula (job, asParamValue)

to set the selection criteria in the designated subreport.

“@SELECTION_ADDCRITERIA”: will add the parameter specified to the selection criteria that exists inside the report.

“@TABLENAME”: The parameter value must be in the form:

“table” “name”

Each instance of “table” will be switched to “name” before each PESetNthTableLocation call. This is done for both the main report and all subreports. You must put the table and name in quotes or the parameter will be rejected. Table names are not treated as case sensitive.
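As a hypothetical sketch (OEORDH2 is a made-up table name for illustration), redirecting the report’s OEORDH table would look like:

' Each instance of OEORDH in the report (and its subreports) will be
' switched to OEORDH2. Note the doubled quotes: the parameter value
' itself must contain "OEORDH" "OEORDH2", each name in quotes.
rpt.SetParam "@TABLENAME", """OEORDH"" ""OEORDH2"""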

“EMAILSENDTO”: This parameter sets the email address when you are using PD_EMAIL. Setting it suppresses the popping up of the address book dialog.

“EMAILSUBJECT”: This parameter sets the email subject when you are using PD_EMAIL.

“EMAILTEXT”: This parameter sets the email body when you are using PD_EMAIL.

“EMAILPROFILE”: This parameter sets the email profile when you are using PD_EMAIL.

“EMAILPROFILEPWD”: This parameter sets the email password when you are using PD_EMAIL. This password will be used to sign in to MAPI.
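Putting a few of these together, a sketch of emailing a report without the address book dialog popping up (the address and subject here are placeholders):

' Email the report silently; EMAILSENDTO suppresses the address book dialog.
rpt.SetParam "EMAILSENDTO", "customer@example.com"
rpt.SetParam "EMAILSUBJECT", "Your quote"
rpt.Destination = PD_EMAIL
rpt.PrintReport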

The following are some parameters that, if you create them in your report, the system will automatically set without you having to do any programming.

“CMPNAME”: The company name.

“ACCPACUSERID”: The Sage 300 User ID.

“ACCPACSESSIONDATE”: The session date from signon.

“ALIGNMENT”: The current alignment option.

“REGIONALFMT”: The current regional format.

Summary

We went through this Crystal API pain once before when Crystal dropped their DLL interface (crpe32.dll) and we had to change all API access over to COM. We had similar issues then, due to various people using the DLL interface and having to re-code for the COM interface. That is when we added some functionality to our API, namely the TABLENAME special parameter, so that a number of people could start using our API instead.

Hopefully most people can switch from using the Crystal COM API directly to using the Sage 300 ERP API; however if something is missing, please comment on this blog so we can consider expanding our API in the future. I’ve already gotten a couple of requests to add silently exporting to Excel format in addition to RTF and PDF. Keep in mind that, just as we have gone from DLL to COM to .Net, chances are that in a few versions the Crystal API will change again (perhaps to a REST web services API) and we will have to go through this again.

Written by smist08

September 15, 2012 at 7:50 pm

Sage 300 ERP 2012 RTM


Yes, Sage 300 ERP 2012 has been “Released to Manufacturing”. In a way this is really a “Release to Marketing”: since we don’t really manufacture much anymore, it gets posted for download and then sales and marketing takes over. I think everyone prefers keeping the acronym RTM rather than changing to RTW for “Released to the Web”. I previously summarized all the great things in the release in my Sage 300 ERP 2012 posting.

It’s been a lot of hard work and a tumultuous journey since we released 6.0A at the end of 2010. But we are really happy with this release; it includes many useful new features as well as building on a number of foundations ready for future development.

Now that we are RTM, business partners should be able to start downloading it on Sept. 5 and DVDs should be available by Sept. 18.

Rebranding

Sage Accpac ERP is now released with the new Sage branding and is now Sage 300 ERP 2012. This means we now match the revamped Sage web site and fit in nicely with all the new sales and marketing material. Hopefully now we can fully leverage and build on the Sage brand to ensure people are familiar beforehand with our products.

In addition our editions are changing. It would be confusing to have Sage 300 ERP 200 Edition 2012. So 100, 200 and 500 editions become Standard, Advanced and Premium Editions. Hence something like Sage 300 Advanced ERP 2012.

Manufacturing

When I started with Computer Associates on the original CA-Accpac/2000 project, manufacturing was a much bigger deal than it is today. In those days we produced a boxed product that consisted of floppy disks, printed/bound manuals, many special offer cards and the copy protection dongle all shrink wrapped in plastic.

Back in the 90s we had quite a complicated schedule of when everything had to be submitted so that it could all come together on our release date. For instance manuals took 1 month to get printed, and disks took 1 week to get duplicated and labeled (if we were lucky). So the technical writers had to be finished a month ahead of the programmers. Similarly any included marketing material, as well as the design for the box had to all be submitted quite early.

Back then we released on 3 ½ inch 720K floppy disks (they were actually in hard plastic by this point). Each module took 6 or 7 disks, so you had a stack of disks for System Manager, a stack for General Ledger and so on. Generally a single 720K floppy was quite a bit more expensive than a blank DVD is today.  (In fact the first version of Accpac was released on 8” floppies for the North Star CPM computer, but that was before my time).

After we shipped the gold master floppy disks off to manufacturing, we still had one week to QA while they were being duplicated. We would continue regression testing through the week looking for any more serious issues. If something was found, it was quite expensive, since usually any manufactured floppies were thrown away and new ones were duplicated.

For a while we produced 5 ¼” floppy disks which were available by demand. With version 3.0A we switched entirely to CDs, but we still shipped one module per CD. With CDs it then became practical to provide things like PDF versions of manuals on the CD along with other extras that were impractical on floppy disks.

One thing with having all the modules on separate CDs was that we could stagger the release: we would release SM and the financial modules first, then the operations modules a few months later, the Payroll modules a few months later still, and various other things even later. The end result being that when we first announced RTM on a version, it would be nearly a year before all the modules, options, integrations, translations, etc. were fully released.

Now there is only one RTM for a version and this RTM includes everything on one download image (or one or two DVDs). This includes all ERP modules, all documentation, all options products, translations in five languages and all integrations (like CRM and HRMS). So now when we RTM, a customer knows that all Sage components they need are ready and they can go ahead and start the upgrade process. We also work with all our ISVs to try to get their products certified for the new version as quickly as possible.

These days everything is on-line, so the web site needs to be ready to link to the new release and then we provide the download images that are posted there. We still produce a gold master DVD, since people can order these if they want them (for a fee).

Release Cycles

Although not visible outside of development, we also run our release cycles quite differently now than we used to. In the early versions all the coding was done first; then when we decided it was code complete we threw it over the wall to QA and went through a long find-and-fix-bugs phase. Generally we shared QA with Sage 50 Canadian (then known as Simply Accounting) and one product was QA’ed while the other was coded.

Now we use an Agile development process and QA is involved from the start; there are no separate development and QA steps. Nothing is considered code complete or done until it is fully QA’ed and documented. Generally this leads to more accurate schedules and higher quality products.

Summary

We are very excited to be releasing Sage 300 ERP 2012. We hope that people upgrade to it and enjoy using it. We are also excited to be starting work on the next version which also looks very exciting.

 

Written by smist08

September 1, 2012 at 5:04 pm

Export


Introduction

I’ve just returned from our Sage Summit 2012 conference where we showcased all the wonderful new features and technologies we are just releasing in new versions or have under development. However in giving various sessions, it was clear that besides evangelizing all the new work going on, quite a few people aren’t aware of all the functionality that has been in the product all along. So as we develop many new reporting and inquiry tools, I thought it was a good idea to go back and talk about our good old import/export technology. In this article I’ll be concentrating on how to get data out of Sage 300 ERP using its built-in technology.

You can run Export from pretty well any data entry screen in Sage 300 ERP. You just open the File menu and choose “Export…”, at which point you get a screen like the one below:

Note that there is also a File – Export… menu on every Finder in the system.

Formats

The first question you need to answer is what format to export to. Below you can see the various export formats that we support:

Generally you are exporting the data because you know where you want it, usually in another program, so you can choose the format that works best for you. The two most popular formats are “Excel” and “Single File CSV”. Many people know Excel and like to use this as their format. Excel is a great tool for doing data analysis on ERP data with easy ways to generate graphs, pivot tables, etc.

The reason people use “Single CSV File” is that it is easy to deal with from scripting tools and other programs. These are just text files where the values are separated by commas (hence the name). Generally it’s easy to programmatically open and parse these files. Another advantage of CSV is that it’s such a simple format that it exports extremely quickly, so for large files this can be a real benefit. Also, Excel has no problem reading CSV files.
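For example, here is a minimal VB6 sketch of reading such a file line by line (the filename is a placeholder, and this naive Split doesn’t handle commas inside quoted values):

Dim fnum As Integer
Dim textLine As String
Dim values() As String

fnum = FreeFile
Open "c:\temp\customers.csv" For Input As #fnum
Do While Not EOF(fnum)
    Line Input #fnum, textLine
    ' Naive parse: split each line on commas.
    values = Split(textLine, ",")
    ' Print the first column of each record to the debug window.
    Debug.Print values(0)
Loop
Close #fnum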

If you are writing your own programs to process these files, consider XML as most languages, these days, have excellent libraries for parsing and processing XML files.

Another advantage of CSV and XML type files is that Excel has limits on the number of columns and rows that are allowed, and these vary by version of Excel. The CSV and XML file formats don’t have any such limitations (you just may or may not be able to load them into Excel).

Select Fields/Tables

After you’ve selected the format and the filename to export to, next you check and uncheck boxes to select the various fields that you want. This includes the fields in the main table along with any detail tables. Many documents in Sage 300 consist of a header record with a number of detail tables; for instance, each Order header record has multiple detail records for all the items that make up the order.

A lot of times people just stick to the default of all fields selected, since it doesn’t make a lot of difference whether the extra fields are there or not.

Note that you can right click on the table name to select or un-select all the fields in that table (a bit counter-intuitive since clicking on the table check box doesn’t do anything). You can also rename the table if you want it to appear in the export file with a different name.

Set Criteria

Suppose you export all your customers to Excel, but most of it is taken up by inactive customers that you don’t care about. You could filter these in Excel or you could filter these from Export so they aren’t there getting in the way to start with. Export has a handy “Set Criteria…” dialog where you can define a filter to select the records being exported.

This dialog is an example of “Query by Example” (QBE). You basically specify the field names in the title area of the grid. Then it “and”s items going horizontally and “or”s items vertically. So in the dialog above, if I added another item to the right then both would need to be true for the record to be exported. If I added another value below then the record would be exported if the value was either of these. Using this table you can build up fairly sophisticated criteria. If you think better in SQL where clauses, you can hit the “Show Filter” button to see the where clause that you are building.

Load/Save Script

Selecting all those fields and setting the criteria can be a bit of work, especially if you need to run the same export every day. The solution is to use the “Save Script…” button to save what you’ve done; when you return you can use “Load Script…” to get it back. This is the first step in automating the export process.

Exporting From Macros

If you want to automate things further you can drive Export from a macro. You can do this with no user intervention whatsoever and have these run on a regular basis. Basically you set up all the options you want and save them in a script file. Then from the macro you can execute this script file. Besides the code for this macro, you need to add two references: “Accpac Import/Export 1.0” and “Microsoft XML, v4.0”. The return value from the export operation is an XML document that contains all the details of what happened. I included a bit of code in the macro to parse that XML and display the number of records exported.

Sub MainSub()
    Dim xmlstr As String, msg As String
    Dim pbstr As String
    Dim breturn As Boolean
    Dim ie As ImportExport
    Dim mDBLinkCmpRW As AccpacCOMAPI.AccpacDBLink
    Dim doc As New MSXML2.DOMDocument40

    On Error GoTo ACCPACErrorHandler ' Set error handler

    Set mDBLinkCmpRW = OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)
    Set ie = New ImportExport
    If (Not (ie Is Nothing)) Then
        With ie
            ' Open the import/export engine.
            .Open mDBLinkCmpRW
            ' Run the export script saved from the Export dialog.
            .ExecuteExportScript "c:\temp\custscript.XML", breturn
            ' The result comes back as an XML document.
            .GetExecuteResult pbstr
            doc.loadXML (pbstr)
            doc.setProperty "SelectionLanguage", "XPath"
            xmlstr = "VIEW[@ID='AR0024']"
            msg = doc.documentElement.selectSingleNode(xmlstr).Attributes.getNamedItem("Exported").nodeValue
            MsgBox msg & " record(s) Exported"
            .Close
        End With ' ie
    End If

    ' Cleanup
    Set mDBLinkCmpRW = Nothing
    Exit Sub

ACCPACErrorHandler:
    Dim lCount As Long
    Dim lIndex As Long
    If Errors Is Nothing Then
        MsgBox Err.Description
    Else
        lCount = Errors.Count
        If lCount = 0 Then
            MsgBox Err.Description
        Else
            For lIndex = 0 To lCount - 1
                MsgBox Errors.Item(lIndex)
            Next
            Errors.Clear
        End If
        Resume Next
    End If
End Sub

 

Export from Crystal

Crystal Reports also supports exporting in quite a few formats. Generally you are choosing whether you want to export the formatting, the data, or a bit of both. If you want good formatting, generally you would export in PDF format (or perhaps one of the Word formats). If you want only the data then export using one of the Excel data-only formats (the ones that don’t say “Data-Only” tend to try to reproduce the formatting in Excel, which sometimes makes the worksheet hard to work with).

Summary

The built in Export functionality in Sage 300 ERP has been there since version 1.0A but often gets forgotten amid many of the new features. It’s a fairly powerful tool and can solve quite a few data sharing and analysis problems.

Written by smist08

August 25, 2012 at 6:17 pm

Sage 300 ERP 2012 Payment Processing


Introduction

We introduced an integration from Sage 300 ERP (Accpac) to Sage Exchange in version 6.0A (with a retrofit to 5.6A). This integration allows ERP users to take credit card transactions directly from ERP screens including pre-authorizations and charges. I blogged about this in these two articles: Accpac Credit Card Processing and Accpac Payment Processing.

Now as we approach our next release we are including a number of enhancements to this integration. We are in the process of changing our version numbering scheme, so the next release of Sage 300 ERP will be Sage 300 ERP 2012 rather than 6.1A. However it is still the next version of Sage 300 ERP after 6.0A.

With this upcoming release we are going to add three main features:

  • Ability to capture pre-authorizations in Shipment Entry, in Invoice Entry, or in either one. Currently users can only capture pre-authorizations in Shipment Entry when items are shipped. Many customers tell us that they would prefer to have office personnel perform the capture rather than have this done during Shipment Entry.
  • Ability to capture a number of orders from different customers in a batch rather than individually. This will streamline operations, especially in high-volume companies.
  • The system will automatically “force” an expired pre-authorization without prompting the user whether they want to force it. “Force” is the process of doing a capture (post-authorization) for a pre-authorization that has expired. Currently a prompt appears if it has expired and users have to select whether they want to force a payment. This change streamlines operations and removes unnecessary user interaction with the software.

Capture Pre-Authorizations During Invoice Entry

This feature basically means exposing in Invoice Entry the functionality already available in Shipment Entry. Capturing a pre-authorization really just means charging the credit card for real, so you get paid. The earliest you are allowed to capture (or charge) the credit card is when the item ships, as per credit card processing rules. However due to separation of duties, in many companies the people doing the shipping aren’t the right people to process the credit card. Usually this needs to be done by a finance person, and it is most convenient for them to do this when they prepare the Invoice (since the shipment has already been done).

Here is the Invoice Entry screen displaying the Invoice Prepayments screen with full credit card functionality. Notice the “Capture…” button on the main form.

When you hit Charge, you get the “Capture Pre-authorization” screen:

And then when you hit “Process Payment” it will capture the pre-authorization, so you will be paid.

Capture a Batch of Orders

Now, let’s look at how we will “capture” a number of orders in a batch. For any orders that have been pre-authorized, this means really charging their credit cards. To do this we have created a new form in the Order Entry Transactions folder:

When you run this Form you get:

From this screen you specify the A/R Batch to add the transactions to (or create a new one). Then you can get a list of Orders which are candidates for charging. To be in the list, an Order must have an outstanding pre-authorization and have shipped. It is a rule from the credit card companies that you can only charge for items that have shipped to the customer. Select which Orders you want to capture (charge). Once you have chosen all the Orders, all you need to do is hit the “Process” button and away it goes.

This should make it easier for companies to process a high volume of Orders.

Automatically “Force” Expired Pre-Authorizations

In the current system, if a pre-authorization has expired then we put up a yes/no question when you go to capture the transaction, asking whether you want to “force” it. Forcing a transaction may not work for various reasons and usually incurs higher transaction fees. This is why we put up the warning dialog, so people who don’t want the extra fees can avoid them.

However the feedback we have received is that this prompt is just annoying. If you are taking credit card transactions then you are willing to put up with the fees and you would like to try to get any money you can. For instance if the transaction fails because they have maxed their credit card, well if you hadn’t tried, you wouldn’t get anything anyway (or would get something like a bad check which has its own fees).

We may offer an option for this, but it seems like the consensus is that people would like the process streamlined.

Summary

The original credit card processing support that we previously added has been quite successful and we are looking to build on it by continuing to add functionality in each release based on customer feedback. Hopefully these new features will keep our Sage Exchange integration growing, and as it grows we will get more feedback and enhance the integration further. Notice that sometimes listening to feedback means streamlining a process or removing a feature, not just adding new bells and whistles.

Written by smist08

April 21, 2012 at 4:12 pm

Sage 300 on Windows 8


Introduction

I must say that I really like Windows 7; it’s a powerful 64-bit operating system that is relatively fast and stable. I really liked Windows XP and I hated Vista. Now it is time to consider Windows 8 as it prepares for a pre-Christmas release. To some degree Windows 8 reminds me a lot of Vista, in that it has changed a lot of UI elements that I liked into rather bizarre elements that I hate. Supposedly new users will like them better and supposedly I will grow to like them after using them for a few months, but I’m nowhere near there at this point and rather dubious that I’ll ever like them.

So why did Microsoft make these changes that I dislike so much? Their big goal was to have one operating system that works exactly the same on phones, tablets, laptops and desktops. Phones and tablets operate using touch; laptops and desktops use a mouse and keyboard. One of the claims is that laptops will start incorporating touch and so will stop using the mouse (or track-pad), but that is yet to be seen. Windows 8 also has a dual personality: it has the traditional Windows desktop where traditional Windows applications run, and then the new Metro world where phone/tablet optimized applications run (as long as they are written to the Metro standards). Windows 8 also supports ARM based devices as well as Intel based devices. If you run on an ARM based device then you only get the Metro personality; you cannot run any traditional Windows applications.

If you’ve ever used a current Windows tablet, one hard element to use by touch is menus. These tend to be very fiddly. So Windows 8 spends a certain amount of time trying to eliminate menus. The first menu they have attacked is the Start menu. Windows 8 has a new “Start Page” as pictured below:

This is then your starting point for the new Metro applications as well as the traditional Windows Desktop applications.

Microsoft UCD

Microsoft has spent quite a bit of time defending the Windows 8 Start Page against a barrage of criticism on the Web. A few blog posts from Microsoft explaining the User Centered Design methodology and thinking are: Evolving the Start Menu, Designing the Start Screen and Reflecting on your comments on the Start Screen. These make interesting reading, especially the comments at the end of the postings. Obviously the Start Menu had a number of problems besides being difficult to control via touch. It always seemed strange that you shut down your computer by clicking “Start”. Once you have many programs it becomes quite unwieldy and performance starts to suck.

These are all legitimate concerns; however the solution of the jarring switch from the desktop to this completely different screen that takes over the entire monitor seems rather inelegant. Is this a case of the solution being worse than the problem? I guess time will tell. It will also be interesting to see if Microsoft takes any of the feedback being provided and makes changes before they release. Plus the Start Page doesn’t really address the scalability issue; instead it just introduces a lot of horizontal scrolling.

Touch Control

Once you are in the Windows desktop, things work pretty much as you would expect and you can get your work done. However when you are in the Metro or Start Page side of things, I find Windows 8 can be quite difficult to use. To keep the screens “clean” and “pretty”, there are no indicators of where you need to click or swipe to do things. Many times I had to Google to find out how to do a basic operation like sign in or close IE. Since things are oriented to touch, when using a mouse you often have to drag things right across the screen, which usually makes my mouse come off the edge of my mouse pad, which I find annoying. I dread to think how this works on a track pad.

The native Metro applications (including built-in things like IE) all operate in full screen mode (like apps do on an iPhone or iPad). On my laptop and desktop, I find this extremely annoying. I have large high resolution monitors and I can have quite a few Windows open and visible at once. On Windows 8, this is all gone.

Back to Windows 1.0

The Metro apps either operate in full screen mode or you can tile them and they stay live on the Start Page. This reminds me very much of Windows 1.0, before the days of overlapping windows, when all your open windows were tiled:

Really back to the future for UI design there. Of course the quality of the graphics is much better now, so the picture looks better, but the idea somehow feels old fashioned.

Sage 300 Runs Fine

Here is a screen shot of running the Sage 300 ERP Desktop and the main Order Entry screen on the Windows 8 Consumer Preview. It looks and acts pretty much like it does on Windows 7. Note the lack of the “Start Menu” along the bottom of the screen.

We’ve tried various things like reporting and editing data and it all seems the same as Windows 7 which is good.

Looking at this screen, how do you bring up the “Start Page” to run another program? From the task bar you can only run IE and Explorer. Here is a funny video of someone’s dad trying to figure out what to do. (I had to Google this, to find out which cryptic key to press).

Need to Fix Our Installation

The one thing that currently isn’t doing a good job is the installation of what we used to put in the Start Menu. These items now end up on the Start Page, and since we used to have sub-menus for things like tools, which the Start Page doesn’t support, we get a whole bunch of items (with bad graphics) messing up the page:

We’ll have to fix this up before our next release, if we can, given that Windows 8 is a bit of a moving target at the moment. Anyway we need to figure out a better way to organize our items and provide better graphics.

Summary

Is Windows 8 a step forward or a step backwards? I think it’s a step backwards for desktop and laptop users. For Microsoft, it’s a step forwards for phone and tablet users. However, is it enough to compete with Apple’s iPhone and iPad? Can it compete with low cost Android devices? Personally, I think Apple has established the standard for how touch works in these environments, and with Android largely copying Apple, it makes Microsoft seem rather odd and not following established standards. Will users accept this? I guess the market will decide once it’s released.

Written by smist08

April 14, 2012 at 7:04 pm

Avoiding Agile Pitfalls


Introduction

This article covers a number of pitfalls that I’ve run into as part of taking a large established development team from the Waterfall to the Agile software development process. This involved a development organization of around 100 professionals in various roles including programming, quality assurance, business analysts, user centered design, product management, documentation, project managers, people managers, etc.

Through it all it’s best to always refer back to the Agile Manifesto as a guide to whether changes are helping or hurting:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Some of the items discussed here are controversial and I certainly have colleagues who think these are good things. So keep in mind these are my personal experiences and judgments.

Too Much Process

Generally the solution to most problems with Waterfall involves adding more process, such as additional checkpoints and oversight. There is a huge tendency to carry this sort of thinking over to Agile. It seems totally alien that something can work without adding back all these layers and processes. Empowering teams to produce working software rather than documents and plans is always a challenge.

Checklists

As people are new to Agile, they go through various training classes and read various books on the Agile process. To help with the learning they develop checklists, such as things to do at standup or things that need to be done to mark a story as done.

Creating a checklist of all the things that need to be done to complete a story seems like a good idea and usually has useful items like performing a code review or making sure unit tests are in place. However blindly applying these checklists to all stories causes all sorts of unnecessary work. It also tends to set people up as gatekeepers, which tends to be a bad thing. For instance a story that involves configuring some XML configuration files for a server component shouldn’t need any user centered design review or Java code review. However when these are on the checklist, people blindly assume they need someone with this expertise to sign off to complete the story, or they need a special dispensation to skip the item. All of this leads to unnecessary discussion, meetings and wasted time. Worse, it leads to feature-trading negotiations. Say one of these reviewers doesn’t have anything relevant for this story but wants something low priority done somewhere else; then they will “trade” their approval for this other work being done. This then subverts the product owner’s prioritization of the backlog.

In general these are symptoms of the product owners and scrum masters not having enough authority, trust or power. The scrum master must diligently and aggressively remove checklist items that aren’t appropriate for a situation. Management has to trust the scrum process and team empowerment enough to believe that the team will do the right thing and apply the necessary done criteria, but leave off everything else.

Over-Swarming

Most agile books advocate creating a priority ordered backlog and then having teams swarm on the top items to get these most important items done first and done quickly. This is all great, but when you have a large development organization say with twelve scrum teams, then having this many people swarm on one problem leads to everyone tripping over everyone else.

I think you need to keep the whole swarming thing within one team. If you want multiple teams swarming on something, then there is something wrong with your backlog and it needs to somehow be refactored. Much more important than swarming is keeping things in the backlog independent of each other and really minimizing dependencies.

I think otherwise you get people thinking along the lines that if it takes a woman nine months to have a baby and Product Management has promised that baby in one month then you need to swarm on the problem and have nine mothers produce the baby in one month.  Generally be careful that you aren’t really just slowing things down and distracting the people doing the real work by throwing resources at a problem.

Not Decomposing Problems Enough

There is a huge tendency for an organization that once did Waterfall, when switching to Agile, to keep doing Waterfall, just fitting the tasks into sprints. A clear telltale for this is when they have a coding story that takes one sprint and then a QA story for the next sprint. This is really an attempt to separate programmers from QA and not have them working together on a story. The programmer is just throwing the code over the wall (or sprint end) to QA, just like in the good old Waterfall days.

The usual excuse for this is that the story is intrinsically too complicated or intertwined to break down and must be done over multiple sprints. Don't believe this. Every case I've looked into that did this could quite easily have been decomposed into multiple stories. It's just a matter of adjusting your thinking to work that way.

I don’t like forcing rules on scrum teams, since this undermines their authority to do what is best for a story, but this is the one case where I find this necessary. If this isn’t done then the stories just get larger and larger until waterfall is back to being the norm.

I really emphasize staying in a releasable state: these big stories take you out of releasable state, and that is a very bad thing.

Source Control

Another problem I've seen is that, during sprints, teams madly check things into the main source tree so that the continuous build and integration system picks them up and their QA can immediately test what they have done. However, this can be disruptive to other teams if one team is breaking things mid-sprint.

If this starts happening, more use needs to be made of branching and merging. With Git this is very natural; with Subversion it isn't too hard either. In any case each team needs to be cognizant of not disrupting other teams as they race to complete their own stories. Modern tools have all the features to facilitate this; you just need to educate people to use them. Basically, if your changes are breaking other teams, then work in a branch and don't merge until your testing is complete. With test driven development, make sure the testing is in place and working before committing to the trunk.

Summary

These are some of the problems that I’ve run into with Agile. I find that as long as teams stick to the intent of the Agile Manifesto, then they can be extremely productive and produce very high quality work. The key is to empower the teams, stick to the basic Agile principles and avoid too much process or extra bureaucracy.

Written by smist08

February 18, 2012 at 8:30 am

Unit Testing Web UIs in Sage 300 ERP

leave a comment »

Introduction

Unit Testing is a technique for testing individual units of source code. Usually in the Java world this means a set of tests for an individual class. The idea is to test the class (or unit) in complete isolation from the rest of the system, so calls to other parts of the system are stubbed or mocked out. Unit tests are typically written by the software developer to perform white box testing of their class. Unit testing is only one sort of testing; there are other types for integration testing, user testing, load testing, manual testing, etc.

The goal of unit testing is to show that the individual parts of a program are correct and then to deliver the following benefits:

  • Establish a well-defined written contract of what the code must do.
  • Find problems early in the development cycle.
  • Facilitate change and refactoring since you can rely on the unit tests to prove things still work.
  • Simplify integration since you know the building blocks work correctly.
  • Provide documentation; the unit tests show examples of actually using the class being tested.

In Extreme Programming and Test Driven Development, the unit tests are written first and then passing the unit tests acts as an acceptance criterion that the code is complete.

JUnit

JUnit is a unit testing framework for Java programs. It is integrated into most IDEs like Eclipse, and it is easy to run the tests from build tools like Ant. Usually in Eclipse you have a test tree with all the test classes, typically one test class for each real class, with the test class containing all the tests for that real class. JUnit supports a number of annotations that you put on the methods in the test class to specify which ones are tests to run, as well as optionally specifying any order or dependency requirements (though the best unit tests run completely independently of each other). It also provides a number of methods for asserting test success or failure (via various forms of assert) and reports the results back to the test framework.
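
As a minimal sketch (the TaxCalculator class and its behaviour are hypothetical, just to show the shape of a test), a JUnit 4 test class looks like this:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TaxCalculatorTest
{
    // Each method marked @Test is discovered and run by the JUnit framework.
    @Test
    public void testTaxIsFivePercentOfSubtotal()
    {
        // TaxCalculator is a hypothetical class under test.
        TaxCalculator calculator = new TaxCalculator(0.05);

        // assertEquals reports a failure back to the framework if the values
        // don't match (the third argument is the allowed floating point delta).
        assertEquals(5.0, calculator.taxOn(100.0), 0.0001);
    }
}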

GWT

The Google Web Toolkit (GWT) supports unit testing with something called GWTTestCase, which bridges GWT to JUnit. If you want to interact with GWT controls then you have to use this. However, it's relatively slow, and you want unit tests to run blazingly fast. A unit test should only take a few milliseconds to execute; if it takes longer it won't be run often. You want your unit tests to be runnable every time you compile, so they don't slow you down as you work.
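
For reference, here is a minimal sketch of what a GWTTestCase looks like (the module name is hypothetical). GWTTestCase predates the JUnit 4 style annotations, so tests are plain public methods whose names start with "test":

import com.google.gwt.junit.client.GWTTestCase;
import com.google.gwt.user.client.ui.Button;

public class ButtonWidgetTest extends GWTTestCase
{
    // GWTTestCase needs to know which GWT module to compile and load
    // before running the tests; this startup cost is a big part of what
    // makes these tests so much slower than plain JUnit tests.
    @Override
    public String getModuleName()
    {
        return "com.example.MyModule"; // hypothetical module name
    }

    public void testButtonStartsEnabled()
    {
        Button button = new Button("OK");
        assertTrue(button.isEnabled());
    }
}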

We use GWTTestCase to test our extension of the GWT CellTable widget. These tests take 10 minutes to run, which is way too slow to expect them to be run on a regular basis. Hence they don't provide the immediate feedback to a developer working on this widget that we would like.

Fortunately, since in GWT you are writing all your classes in Java, if you structure your program correctly you can have most of your classes not interact with GWT at all, or easily mock the GWT parts. Then you can use JUnit directly to unit test your program.

MVC

One of the key goals of enforcing the MVC design pattern on our developers creating UIs is to facilitate good unit testing. This way most of the code that needs testing lives in the Model and the Controller, which can be tested with JUnit, and we don't need to use GWTTestCase. This greatly simplifies unit testing and greatly improves the productivity and speed of the tests. Usually our View part is very small and mostly implemented in the SWT framework, so that unit testing lives in the framework's unit tests (many of which are GWTTestCase) and not in the actual UIs.
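
A simplified, hypothetical sketch of why this helps: if the controller only sees the view through an interface, plain JUnit can test the controller with a mocked view, and no GWT widgets are ever touched:

// The view is hidden behind an interface, so the controller never
// touches GWT widgets directly and can be tested with a plain mock.
public interface CustomerView
{
    void showCustomerName(String name);
}

public class CustomerController
{
    private final CustomerView view;

    public CustomerController(CustomerView view)
    {
        this.view = view;
    }

    public void customerLoaded(String name)
    {
        view.showCustomerName(name);
    }
}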

EasyMock

EasyMock is a library that can dynamically "mock" classes that aren't part of the test. With unit tests we only want to test one class at a time, and we want the tests to run quickly. We don't want to require that a particular database be in place and then have to wait for the tests to run as they make database queries. We want the tests to run in milliseconds, and we want them to run in spite of whatever might be happening in the rest of the system. To this end the test framework needs to replace the real classes that the class being tested calls and interacts with, with something appropriate for testing. These replacement classes then have the ability to return unexpected or rare results, such as network disconnection errors, or other rare, hard-to-set-up scenarios. One way to do this is to write "stub" classes that replace the real classes and contain only the code required for the testing. Writing and maintaining stub classes is difficult since you need to keep them in sync with the classes they are stubbing; keeping these correct can become a major amount of work.

EasyMock offers an easier way. It generates the classes on the fly using Java's proxy mechanism. This way the class interface is always up to date, since it is generated from the real class. You first run in a "recording" mode where you record what you want the various methods to return to the test cases. Then you switch to "replay" mode and run the tests.
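
A rough sketch of that record/replay/verify cycle, using a hypothetical PriceList interface:

import org.easymock.EasyMock;
import org.junit.Test;

public class RecordReplayExampleTest
{
    // Hypothetical interface that the class under test would depend on.
    interface PriceList
    {
        double priceOf(String itemNumber);
    }

    @Test
    public void testRecordReplayVerifyCycle()
    {
        PriceList mockPrices = EasyMock.createMock(PriceList.class);

        // Recording mode: declare the calls we expect and what they return.
        EasyMock.expect(mockPrices.priceOf("A1-103/0")).andReturn(19.99);

        // Switch to replay mode: the mock now answers as recorded.
        EasyMock.replay(mockPrices);

        // Here the class under test would be exercised; it would make this call.
        mockPrices.priceOf("A1-103/0");

        // Verify that all the expected calls actually happened.
        EasyMock.verify(mockPrices);
    }
}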

For this to work effectively, you must design your classes so that the classes they use can be mocked. This usually means creating dependencies externally and passing them to the constructor, rather than creating secret internal instances. Generally this is good object oriented design anyway, and difficulty in constructing unit tests is usually a "code smell" that something is wrong with the design. For simple utility classes or simple classes in the same package, it's often easier to leave them in place rather than mock them; then they get a bit of testing as well and, as long as they run quickly, shouldn't be a problem.
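
As a small hypothetical sketch of the pattern (both types invented for illustration), the dependency comes in through the constructor, so a test can hand in an EasyMock mock instead of the real thing:

// Hypothetical dependency interface.
interface TaxService
{
    double taxOn(double subtotal);
}

// The TaxService is passed in through the constructor ("dependency
// injection"). Creating it internally with "new TaxService()" would
// make it impossible for a test to substitute a mock.
public class OrderTotalizer
{
    private final TaxService taxService;

    public OrderTotalizer(TaxService taxService)
    {
        this.taxService = taxService;
    }

    public double totalWithTax(double subtotal)
    {
        return subtotal + taxService.taxOn(subtotal);
    }
}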

Example

Below is one of the tests from the BOM Detail popup form in Order Entry. This shows the structure of a JUnit test and shows how to mock a big class (namely the Model) that this class interacts with. Notice that the Model is created externally to this class and passed into the constructor, so it is easy to mock. First we record what the mock class should do and then we switch to replay mode to do the actual test.

import org.easymock.EasyMock;
import org.junit.Test;

public class NestedViewBOMDetailsCallbackTest
{
    private static final int TEST_PARENT_COMPONENT_NUMBER = 42;

    /**
     * Tests that the callback's onPopupClosed handler will filter on the child
     * BOMs of the parent component number that was passed in at callback
     * creation time.
     */
    @Test
    public void testOnPopupClosedFiltersOnChildrenOfCallerParentBOMNumber()
    {
        // Set up the mock model and the expected method calls that should be
        // made on it by the class under test. filterOnChildBOMsOf() is void,
        // so expectLastCall() records the expectation.
        OEDocumentBOMDetailsModel mockModel =
            EasyMock.createMock(OEDocumentBOMDetailsModel.class);
        mockModel.filterOnChildBOMsOf(EasyMock.eq(TEST_PARENT_COMPONENT_NUMBER),
            EasyMock.isA(EntryFilterCallback.class));
        EasyMock.expectLastCall();

        // Tell EasyMock we've finished our setup, so that from here on it
        // records the calls actually made on the mock.
        EasyMock.replay(mockModel);

        // Call the class under test to exercise the method under test.
        NestedViewBOMDetailsCallback callback = new NestedViewBOMDetailsCallback(
            mockModel, EasyMock.createMock(BOMDetailsView.class),
            TEST_PARENT_COMPONENT_NUMBER);
        callback.onPopupClosed(null);

        // Check that the expected method calls on the mock happened when we
        // exercised the method under test.
        EasyMock.verify(mockModel);
    }
}

Summary

Unit testing is an effective form of testing that finds quite a few bugs and helps keep code running smoothly as a system is developed and refactored going forward. Sometimes the discipline of unit testing forces programmers to fully understand a problem before coding, resulting in a better solution. Certainly it can't be the only form of testing, but it is an important building block of a quality software system.