Stephen Smith's Blog

All things Sage ERP…

Archive for April 2011

Skills for Sage Web Development

Last week I blogged on the Sage Developer training at Sage Summit. I’ve since received a few inquiries about what skills are needed in this area. So I thought I’d blog this week on desirable skills, along with a number of references where you can learn online.

All our Sage ERP products, including Sage ERP Accpac, Sage ERP MAS 90/200, Sage ERP MAS 500 and Sage ERP X3, are somewhere on a Web journey, whether moving the entire product to the Web or adding integrations with various Connected Services. More and more ERP installations are being done in conjunction with a CRM installation, whether SageCRM or Sage SLX, and the CRM products are all well into their own Web journeys. We are integrating mobile applications that are all based on Web technologies like HTML5. We are offering all our products in both on-premise and cloud based configurations.

As we go down this road, what skills should you master as a Development Partner or a Business Partner? What skills should you be looking for as you hire new consultants? This blog post looks at some of the technologies that we are using in our various products, where to learn about them and what skills are required. Remember that the Web is a fast changing world, and the best skill to have is the ability to learn new things: to be enthusiastic about change and excited about the possibilities all these new technologies open up.

This blog posting is really about in-depth technical skills for Developers and Advanced Consultants. However there are also many new business metaphors to learn. For instance: does your company really embrace Social Media? More and more business is done as a result of interactions on Twitter, LinkedIn, Blogs, Facebook, Yammer, Quora, etc. How to get leads and do business in a much more cloud based web world is a whole other topic. But everyone should be trying their best to fully embrace the new Social business world.

This blog posting is quite forward looking and isn’t a commitment to any product roadmap or to the inclusion of any given feature or technology in any specific product or version. Sometimes learning a specific technique may not have immediate value, but often learning one thing makes learning a related, similar thing easier later on.

I’ve blogged quite a bit on all these topics in the past and a good list is available here. If you are an Accpac DPP member then make sure you check out the Accpac DPP Wiki.

SData

SData is Sage’s standard RESTful web services interface to all Sage applications. The main hub for information on SData is here. The definitive book on RESTful Web Services is by Leonard Richardson and Sam Ruby from O’Reilly Media.

Generally, the skills to have for SData are a working knowledge of RESTful Web Services and XML. Failing RESTful experience, SOAP web service knowledge would be second best.

Generally, over time SData will become the preferred API for integrating with all Sage products.
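
To give a flavour of what working with SData involves, below is a minimal Java sketch that reads an SData feed over plain HTTP. The server, resource names and credentials here are made up for illustration; a real URL would follow the SData URL conventions documented at the SData site.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.bind.DatatypeConverter;

    public class SDataGetExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical SData URL of the form server/sdata/application/contract/dataset/resource.
            URL url = new URL("http://localhost/sdata/sageERP/accounts/-/customers");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // SData rides on standard HTTP; Basic authentication is shown here.
            String credentials = DatatypeConverter.printBase64Binary(
                    "ADMIN:password".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + credentials);
            conn.setRequestProperty("Accept", "application/atom+xml");

            // The response is an Atom feed; each <entry> carries one resource's payload.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }

Being comfortable reading feeds like this, and sketching a quick test harness such as the above, is exactly the kind of RESTful working knowledge I mean.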

SWT

SWT is the Sage Web Toolkit which is based on the Google Web Toolkit (GWT). This toolkit is SData aware (meaning all controls can be bound to SData feeds or properties). It supports drag and drop and has a visual UI designer.

For GWT, the Google web site has lots of good information. Plus Google is assembling an online GWT book. There are also many commercial books on GWT.

When programming with SWT you do all your coding in the Java programming language, so a good knowledge of Java is very helpful. There are many on-line resources, books and training courses on Java. Like any programming language, you learn the most by doing; that is the only way to really learn. Personally, I like the book Big Java by Cay Horstmann. Another popular book around the office is Effective Java by Joshua Bloch.

Although you do all your programming in Java, it still helps to know something about CSS, HTML and JavaScript. Again there are many books and courses on these, as well as many on-line resources. JavaScript is going to become a bigger and bigger part of doing customizations for all Sage products.
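
As a taste of the programming model, here is a minimal entry point using plain GWT classes. This deliberately shows only standard GWT (not the SWT-specific controls or SData data binding), but SWT code has the same shape since it builds on GWT; the GWT compiler turns this Java into JavaScript that runs in the browser.

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.event.dom.client.ClickEvent;
    import com.google.gwt.event.dom.client.ClickHandler;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.ui.Button;
    import com.google.gwt.user.client.ui.RootPanel;

    // The entry point is the Java equivalent of main() for a GWT module.
    public class HelloModule implements EntryPoint {
        public void onModuleLoad() {
            Button button = new Button("Click me");
            button.addClickHandler(new ClickHandler() {
                public void onClick(ClickEvent event) {
                    Window.alert("Hello from Java compiled to JavaScript");
                }
            });
            RootPanel.get().add(button);
        }
    }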

Cloud

As our products move to the cloud, whether in the manner of www.accpaconline.com or as true SaaS solutions, development partners who want to have their products hosted in the same manner will need to use our ERP products’ SDKs and APIs and follow our best practices; otherwise we won’t be able to host your application. Generally you use the same tools and techniques as on-premise, but with a few restrictions. For instance, since everyone runs the same programs, source code customizations can’t be allowed. Knowing something about the main cloud platforms, like Amazon, Azure or Rackspace, can only help.

Security

As everything becomes connected to the Internet security and privacy become bigger and bigger concerns. I’ve blogged a couple of times on this here and here. These will give you a starting point of what to look into.

Common Components

Many of our ERP packages use the same development tools or rely on the same server components. Here are a few examples that you may want to become familiar with.

Eclipse: Eclipse is a programming IDE that is used by both Accpac and MAS 90/200 for program development. Both products include Eclipse plug-ins in their SDKs and have documentation oriented to working in Eclipse. There are many on-line resources and books about Eclipse available.

Tomcat: Apache Tomcat is a Java application server that is used by both Accpac and SageCRM. There are many good online resources for Tomcat along with many books available. However the configuration and setup of Tomcat is handled, hopefully transparently, by our installation program, so the hope is that you don’t need to really know that much about it.

IIS: Most of our products use IIS if they require a Web Server. Accpac uses IIS to serve up all static content as part of its SWT/SData framework. SageCRM also uses IIS and largely operates as an IIS extension. Again there are many on-line resources for IIS and many books on the topic. There is some trickiness to IIS, in that it is quite different on each version of Windows and can be confusing if you are jumping between versions.

Jakarta: Jakarta is the common name for the Apache Tomcat Connector to IIS. This component runs as an ISAPI filter, scans all requests to IIS and then intercepts the ones required by Tomcat and forwards them on. If we do our job right this should be configured by our installation program and you shouldn’t need to know about it.

Some Oldies but Goodies

As we move to the Web, don’t lose sight of the fact that we are still built on the same technologies as before, and a good foundation in these is a key to success.

SQL Server: All the Sage mid-market ERP products and the associated CRM products now support SQL Server. When integrating multiple products together, like ERP, CRM, HR and FAS, it’s best to use SQL Server since this is a common supported database across all of these. There are many on-line resources, courses and books on SQL Server. Having a good working knowledge of SQL Server is a prerequisite for all Certified Installers.

Crystal Reports: All the Sage mid-market ERP products use Crystal Reports as their main reporting engine. The Crystal Reports website, along with a number of books, courses and online forums, is a good source of information.

Windows Server: A good knowledge of Windows Server (especially 2008 R2) is a real benefit to any partner working with any Sage product. Windows Server has now been around for a long time, so there are many on-line resources, courses and books that provide really good coverage, especially of what is affected by security and group settings, which can have a big effect on how applications function.

Summary

Whether hiring new employees, evaluating new consultants or brushing up on your own skills, I hope this blog gives a few ideas of areas that would benefit you when working with Sage ERP over the coming transition to the Web and Cloud Computing.

Written by smist08

April 30, 2011 at 3:22 am

Developer Training at Summit

Sage Summit is being held on July 10-15 at the Gaylord National Hotel and Convention Center in Washington DC. During the conference we will be holding developer training for Sage ERP MAS 90/200, Sage ERP MAS 500 and Sage ERP Accpac. Most of this training will take place on the Monday and Tuesday during the partner half of the conference.

We will have a number of key developers attending the conference representing all three product lines. Besides offering the developer training, they will be able to network with many Business Partners and Customers to learn first-hand how people use our products and what we can do to help them in their work. For Partners and Customers this is a great opportunity to provide un-filtered feedback to the developers actually working directly on the products.

We will be offering a number of Developer Track Sessions for each Sage ERP product line. In addition, Developers can sign up for one-on-one sessions to discuss specific product problems or concerns. So hopefully, through the training classes, the one-on-one sessions and general networking, Development Partners will be able to get all their questions answered and their concerns heard.

All mid-market ERP solutions rely heavily on a strong and vibrant Developer community to produce additional Accounting Modules as well as to provide integrations into all sorts of complementary products. Sage’s success is very much tied to the success of the Independent Developer Community and we are trying hard to strengthen that at Summit.

Disclaimer: The agenda for Summit is still evolving, so there may be some changes before being finalized.

Sage ERP MAS 90/200

For MAS 90/200 the key initiative is to ensure all Development Partners are taking full advantage of the SQL Server version. The SQL Server version was just released and we are looking for Developer Community support to really make this platform successful. Then for people who integrate with MAS 90/200, we have sessions on using the new Business Framework as well as on integrating with real-time data views.

  • Programming to MAS 200 SQL Standards (Part 1):
  • Programming to MAS 200 SQL Standards (Part 2):

Are you ready to move your modifications to the MAS 200 SQL platform?  Learn some of the programming tricks you’ll want to incorporate into your code to optimize performance in the SQL environment.  This two-part session will cover a number of MAS 200 SQL developer partner topics with a particular focus on developer standards, code optimization and ProvideX language enhancements.

  • Using the Business Object Interface – Beginner (Part 1): Learn how to use the updated Business Object Interface with MAS 90, MAS 200 and MAS 200 SQL.  This is the first half of a two-part session which will cover the basics of the interface, focusing on simple implementations from the standpoint of a development partner or system integrator.  The second session (BOI Advanced) will introduce some advanced techniques and code examples to take your integration with MAS to the next level.
  • Real-Time Data Views (and other cool tricks) using SageCRM and MAS 90: Want to bring your integration between MAS 90 and SageCRM to the next level?  Learn how to customize real-time data views in SageCRM to link your data with MAS 90 for queries, reports, promotions, etc.  See the latest enhancements to the SageCRM dashboard and learn how to customize it to meet your specific customer’s needs.
  • Using the Business Object Interface – Advanced (Part 2): This is the second session on the Business Object Interface, where we will introduce some advanced techniques and code examples to take your integration with MAS to the next level.

Sage ERP MAS 500

Although MAS 500 isn’t currently in the process of a major technology transformation, it is still a fully supported strategic product. For the MAS 500 development community this means you have a stable platform to build on. This is an ideal opportunity to improve your development skills and to add many new features to your MAS 500 products or to develop new ones.

  • .Net with MAS 500: Have you ever wanted to write a WinForms application that integrates with MAS 500? How about a Windows Presentation Foundation based application? Learn how at this session. See how .NET interacts with the session objects and other controls like Security, Login, Database Connection, Licensing, Lookup and Selection.
  • Customization: This class will show how to use VBScript and basic Customizer controls to provide customized utilities that help fit the Sage MAS 500 application to the specific needs of the user. It will also cover how to create custom tables and tie them to Sage MAS 500 using Customizer to help expand the system’s resources as needed.
  • APIs and DI: Data Import uses them. So do Data Migrator and the application imports. What are they? Application Programming Interfaces, or APIs. Find out how you can use the same interfaces to safely integrate your application data into MAS 500. We will also show you how to use Data Import Manager to load data into MAS 500.
  • MAS 500 Development: New to MAS 500 development, or do you need a refresher course? Check out this session to learn about MAS 500 development best practices. We will be looking at developing extensions to MAS 500 in .NET or VB. What APIs are available for use by MAS 500 integrators? What options are available for customizing MAS 500, and which is the best one? What is the right way to import data into MAS 500? These questions and more will be answered for you at this session.

Sage ERP Accpac

The main emphasis of the Accpac developer training is to help ISVs move their products from the Accpac 5.x Visual Basic UI framework to the new Accpac 6.x SWT Web Framework. However we do have our usual session on the introduction to the entire SDK and after a hiatus we have a session on View (or Business Logic) programming.

Sage ERP Accpac – Accpac SDK for Developers – Part 1:
Sage ERP Accpac – Accpac SDK for Developers – Part 2:
Sage ERP Accpac – Accpac SDK for Developers – Part 3:
Sage ERP Accpac – Accpac SDK for Developers – Part 4:
Sage ERP Accpac – Accpac SDK for Developers – Part 5:

Through a series of hands-on labs, these sessions will show you how the Accpac 6.1 SDK is used to create web screens for Sage ERP Accpac and integrate them to the Accpac program views. Multiple labs will reinforce your knowledge of the Eclipse IDE, Sage Web Toolkit (SWT) and SData web services.

Sage ERP Accpac SDK – Writing Accpac Views: Do you need to create custom Accpac Views with the business logic for your applications? This session outlines the code libraries, templates, header files and tools available in the Accpac SDK that greatly simplify the task of creating a custom View.

Sage ERP Accpac SDK – UI Porting Tool: Join us at this session and learn how the UI Porting Tool can save you time migrating your existing VB screens to the Web. We’ll take an existing Accpac VB screen and walk through the process to convert this to a Web based form.

Sage ERP Accpac SDK – Implementing User Assistance: Join us in this session as we look at the different tools used by Accpac for creating documentation, videos, help and translations for Accpac 6.1.

Sage ERP Accpac SDK – Welcome to SData (Part 1):
Sage ERP Accpac SDK – Welcome to SData (Part 2):

What is SData? It is the mechanism used by Sage applications to integrate with each other and external applications including mobile devices. At this session you will learn how SData is used in the Accpac 6.1 framework and see how it is implemented with other applications such as SageCRM.

Sage ERP Accpac SDK – Using Open Source tools for Accpac development: Take a look behind the scenes at the many Open Source tools used by the Accpac development team. Learn how to leverage these tools to code, build, test and document your Accpac 6.1 applications.

Sage ERP Accpac SDK – Introduction: Are you interested in customizing or integrating your applications with Sage ERP Accpac? This session introduces you to the Accpac SDK and the architecture that allows you to add value to the Accpac product family. This is an overview of the entire SDK including Business Logic, Reporting, External APIs and User Interfaces.

Summary

Often Sage development partners attend Summit to have a booth in the trade show and perhaps to give a session or two. With the Developer training track, Development Partners can also receive additional training on the products they develop for and network with key Sage developers.

Update 2011/05/27: We just added a session: Sage ERP Technology Roadmap on Sunday, July 11 at 1pm – 2:30pm.

Session Description: Join Sage ERP Product leaders who will be discussing the technology evolution of the Sage ERP product lines. This session will cover our product journey to the cloud; the technologies being used to deliver a rich web experience; developments on the Connected Services front (joining on-premise applications with the Cloud to enable web and mobile services); what kinds of tools, technology and skills are needed to integrate with and customize our products in the web world; and the collaboration occurring on common components. There will also be time set aside for open dialogue. This session is for Sage ERP Development Partners only.

Written by smist08

April 23, 2011 at 5:02 pm

Test Driven Development at Sage

As the Sage ERP development teams continue their journey into Agile Development, we are looking to even more fundamentally change the way we do things. We have adopted the Scrum methodology and are now operating in two week sprints. Within those sprints we take user stories and complete them to very exacting “doneness” criteria. The goal is to have a releasable product at the end of each sprint (releasable in a quality context, but not necessarily functionally complete). So how do we perform all the development, QA, UCD and documentation within small two week sprints? What we are looking to do is to truly use a Test Driven Development Process. This blog post is intended to outline what some of our Agile Teams are already doing and what the rest are looking to adopt soon.

We switched from the Waterfall Software Development Methodology to the Agile Methodology, which gave us a number of good gains, the main one being that bugs are contained to sprints, so we don’t end up at the end of a project having to fix a huge number of bugs in a QA or regression phase after code complete. However, there is a lot of temptation for people who are used to the Waterfall Methodology to start doing mini-waterfalls within the Agile sprints: for each story, they would design, code, QA and bugfix the feature as a little waterfall within the sprint. In Agile the QA and programmer should work together; there should be no separation of work like this. As a first step to stopping this we shortened our sprints from three weeks to two weeks, which made it very hard for people to do mini-waterfalls within a sprint since there just isn’t enough time.

Another goal of Agile Development is to make extensive use of automated testing so that rather than having to continually manually test the product to ensure bugs aren’t creeping in, instead there are automated tests that run on every build of the product to ensure this doesn’t happen. There can be a perception that adding automated tests takes too much time and slows things down. This doesn’t take into account that finding and fixing bugs takes way more time and resources.

To solve these problems we have adopted Test Driven Development within the Agile Sprints. Here we reverse the development process to some degree and write the tests first. This has a number of benefits:

  • QA team members don’t need to wait for programmers to finish coding before they have anything to do.
  • Automated tests are developed as part of the process and are fundamental to the process.
  • This forces the story to be well defined up front since the first thing you need to do is define how to test it.

Immediately, the biggest gain we got from this approach was far better defined stories with far better acceptance criteria. This saved a lot of re-work where programmers implemented something without fully understanding the story or acceptance criteria, only to have the story fail because of it.

The diagram below shows the steps we follow executing stories during an Agile Sprint (developed by the Accpac Super team):

Notice how we write the test cases first, as we investigate the story. Only then, after this is all understood and reviewed, do we actually do the implementation.

A common problem with Agile is that programmers will code quite a few stories and then give them to QA near the end of the sprint. This overwhelms the QA testers and doesn’t give them sufficient time to do all their testing. With this new approach, all the test cases are written up front and well defined, then if too many things end up at the end of the sprint, other people like Business Analysts, Writers or Programmers can help execute the test procedures to complete the testing.

So far this sounds fairly manual. But there are good tools to help automate this process.

Fit and FitNesse

FitNesse is a Wiki and an automated testing tool. It is oriented around doing acceptance testing rather than unit testing. A lot of times programmers look at this tool and dismiss it as another unit testing framework. The difference is subtle but important. For the programmer, working with Fit feels similar to working with a unit testing framework; however, you are really writing an API for expressing tests rather than the actual tests. FitNesse then gives QA (or other non-programmers) a mechanism to feed input into these tests to cover all the necessary test cases. In the Wiki you document the acceptance criteria and enter all the tests to prove that the acceptance criteria are met. Then, since these are automated tests, they can be run as often as desired (perhaps on every build) to ensure the story’s acceptance criteria continue to be met.
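
To make this concrete, here is a minimal sketch of a Fit column fixture in Java. The fixture name, columns and business rule are invented for illustration, but the mechanics are how Fit works: public fields receive the input columns from the wiki table, and public methods compute the output columns.

    import fit.ColumnFixture;

    // Backs a FitNesse wiki table like:
    //   |CreditLimitFixture|
    //   |customerNumber|orderAmount|approved?|
    //   |1100          |500.00     |true     |
    //   |1100          |99999.00   |false    |
    public class CreditLimitFixture extends ColumnFixture {
        public String customerNumber;  // input column
        public double orderAmount;     // input column

        public boolean approved() {    // output column ("approved?")
            return orderAmount <= creditLimitFor(customerNumber);
        }

        // Stub standing in for a call into the real application's API.
        private double creditLimitFor(String customer) {
            return 5000.00;
        }
    }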

We use EasyMock to be able to test components in isolation, so modules can be tested without requiring the whole system to be present. This makes setting up the tests far easier, makes it very easy for them to run in the build system environment, and allows them to run very quickly (since things like database access are mocked).
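
For instance, a data-access component can be mocked out so the logic under test never touches a database. The repository interface below is a hypothetical stand-in, but the record, replay and verify calls are standard EasyMock.

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;

    public class MockedRepositoryTest {
        // Hypothetical interface standing in for a real data-access component.
        interface CustomerRepository {
            double getCreditLimit(String customerNumber);
        }

        public void testCreditCheckWithoutDatabase() {
            CustomerRepository repo = createMock(CustomerRepository.class);
            // Record the expected call and its canned result; no database needed.
            expect(repo.getCreditLimit("1100")).andReturn(5000.00);
            replay(repo);

            boolean approved = 2500.00 <= repo.getCreditLimit("1100");
            if (!approved) {
                throw new AssertionError("credit check should pass");
            }

            verify(repo); // fails if the expected call never happened
        }
    }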

We’ve found this to be a very powerful tool to document and produce automated tests. It allows QAs, who know the product and what it should do, to create automated tests without having to be programmers. You can probably find standalone tools that do the individual parts of FitNesse better, but the power comes from how they work together and enable the team to create the tests and automate the acceptance testing.

Jenkins

Jenkins (previously known as Hudson) is a continuous integration and build tool. It uses Ivy to define all the dependencies between all the modules in a system. It then scans source control (we use Subversion) to see when files change. When it detects that a file has changed, it will re-build any modules that the file is part of; then, based on the Ivy dependency map, it will build any modules that depend on these, and so on. As part of this process it can also run any associated automated tests and run tools like FindBugs or Emma.

The goal of this is to report back to a team as soon as possible that something they have done has broken something else. Bugs are cheaper to fix the sooner they are found. If they are found right away, while someone is still working on the offending code, they can usually be fixed immediately and easily. Combining Jenkins and FitNesse is a great way to accomplish this.

Regression Testing

In addition to unit testing and automated acceptance testing, we also run full automated tests on the whole system.

The Accpac team uses Selenium. Selenium drives the system by driving the Browser: it uses the automation interface of the Browser, so it looks to the system like an end user and all parts of the system are exercised. I blogged previously about our process of moving the VB screens to the Web, where I described our Porting tool that initially moves the VB screens to SWT. When we generate the SWT project, we also generate a number of Selenium tests for each screen automatically. So as a starting point we have automated Selenium tests to drive the basic CRUD functionality of each screen.
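
A generated test might look roughly like the following Selenium WebDriver sketch. The URL and element ids here are hypothetical stand-ins for the ids the porting tool actually generates for a screen.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class CustomerScreenCrudTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                // Hypothetical URL and element ids for a customer screen.
                driver.get("http://localhost/screens/customers");
                driver.findElement(By.id("customerNumber")).sendKeys("1100");
                driver.findElement(By.id("goButton")).click();

                String name = driver.findElement(By.id("customerName")).getText();
                if (name.isEmpty()) {
                    throw new AssertionError("expected customer 1100 to load");
                }
            } finally {
                driver.quit(); // always close the browser, pass or fail
            }
        }
    }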

For non-Web products like MAS, Peachtree and Simply Accounting, we use SilkTest for automated system testing. We have developed fairly extensive suites of tests to provide good automated coverage for all these products.

To the Cloud

What we are really striving for is that after each Agile Sprint we are in a releasable state. This means that we could post the result of each Agile Sprint to a hosted site where real customers are running our software. These customers would get the benefits of whatever improvements we made during the Sprint with no worry that we have broken anything.

Whether we actually ever do this is a separate question, but we are striving towards having our software always in a releasable state, so we can make this decision based on the business and market needs. The trend in the software industry is towards frequent smaller releases rather than giant infrequent releases. Generally this leads to a much faster pace of innovation (just look at how fast Google Chrome is evolving) and provides much faster customer feedback into the development system resulting in software that matches customer needs far better.

Although most of our competitors aren’t operating like this yet, eventually they will or they will be replaced by companies that do. We want to ensure that Sage can develop software as efficiently as anyone and that we have top product quality. We also want to ensure we are competitive and can innovate at a very competitive pace. Hopefully you will see this over the coming product releases.

Summary

We are still learning about effective test driven development at Sage, but already the teams that are pioneering its use are seeing great results. We’ve done automated regression testing with tools like Selenium and SilkTest for some time now and seen good results, now it’s a matter of taking it to the next level. The goal being to shorten our development/release cycles while increasing product quality at the same time.

Written by smist08

April 16, 2011 at 5:31 pm

Accpac A/R Database Structure

This blog posting will be looking at the structure of some of the parts of the Accounts Receivable module of the Sage ERP Accpac product. You can find the structure of the individual tables in the Accpac Application Object Model (AOM), which you must view in IE. However, this doesn’t tell you how the tables are related or how data flows from one table to another as you post various transactions. There are 103 tables in A/R, so obviously we can’t describe each one in a single blog post; instead we’ll just look at a few interesting cases. Understanding these relationships can be especially important to people writing sophisticated custom reports or BI Views.

The main types of database tables in A/R can be categorized as:

  • Setup
  • Master Data
  • Data Entry
  • Processing
  • Periodic Processing

Setup Tables

Setup tables tend to be fairly simple and their usage can be easily discerned from the AOM. These include things like ARDUN (Dunning Messages), ARRBC (Billing Cycles), ARRDC (Distribution Codes), ARRTA (Terms Codes) and such.

Master Data

The main master data table in A/R is the Customer table ARCUS. ARITH (Items) is another master data table. The A/R Customers table is synced with the companies table in SageCRM if the integration has been activated. The A/R Customer Optional Fields table (ARCUSO) is a detail under the ARCUS table; a detail means that there are multiple records for each ARCUS record. Some other tables are closely related to the ARCUS table and linked to it, namely:

  • ARCSM (AR0022): Customer Statistics
  • ARCSP (AR0023): Ship-to Locations
  • ARCMM (AR0021): Customer Comments

Master data tables are like setup tables, but tend to contain the main data for the application, as opposed to a few records that configure the application.

Data Entry

Invoice Entry is the main set of tables for holding A/R Invoices. There is a table for the batch and then a table for the headers (there are multiple headers per batch) and a table for the invoice detail lines (again multiple per header). There are other detail views shown below where indentation indicates that there are multiple records for the table above.

  • ARIBC (AR0031): Invoice Batches
    • ARIBH (AR0032): Invoices
      • ARIBD (AR0033): Invoice Details
        • ARIBDO (AR0401): Invoice Detail Optional Fields
    • ARIBS (AR0034): Invoice Payment Schedules
    • ARIBT (AR0035): Invoice Detail Comments
    • ARIBHO (AR0402): Invoice Optional Fields

Receipt and Adjustment Entry is the set of tables for holding both receipt and adjustment batches. The first field of each batch record indicates the batch type.

  • ARBTA (AR0041): Receipt/Adj. Batches
    • ARTCR (AR0042): Receipts/Adjustments
      • ARTCP (AR0044): Applied Receipts/Adj.
      • ARTCU (AR0045): Adjustment GL Distributions
      • ARTCN (AR0043): Miscellaneous Receipts
      • ARTCRO (AR0406): Receipt/Adj. Optional Fields

Refunds are the set of tables that refund documents are entered into.

  • ARRFB (AR0140): Refund Batches
    • ARRFH (AR0141): Refund Entries
      • ARRFD (AR0142): Refund Details
        • ARRFDJ (AR0145): Refund Detail Jobs
    • ARRFHO (AR0143): Refund Optional Fields

Posting

Posted documents end up in the following tables:

  • AROBL (AR0036): Documents
    • AROBS (AR0037): Document Scheduled Payments
      • AROBP (AR0038): Document Payments
      • AROBLJ (AR0200): Open Document Details

When you post documents you get the following general data flows.

Invoices:

  • ARIBH → AROBL
  • ARIBS → AROBS
  • ARIBD → AROBLJ

Receipts:

  • ARTCR → AROBL
  • ARTCP → AROBP

Note that for each ARTCP record you get 2 AROBP records. One attached to the Invoice’s AROBL which is positive to reduce the invoice and one attached to the Receipt’s AROBL which is negative to reduce the receipt.

Example

Let’s look at an example where we enter and post an Invoice and then enter and post a Receipt for that Invoice. Sometimes it helps to do small transactions in Accpac and then look at the records produced in a database tool like SQL Server Management Studio.

First let’s enter an Invoice; here are the main Invoice tables showing the values of a few key fields:

ARIBC   Batch   Batch Total   Number of Entries
        36      2.44          1

ARIBH   Batch   Entry   Customer   Document
        36      1       1100       IN0000012

ARIBD   Batch   Entry   Line   Item    Quantity   Price
        36      1       20     CA-78   1          2.25

Now we post the Invoice, which generates the following AROBL/AROBLJ records:

AROBL   Customer   Document    Batch   Entry   Invoice Amount
        1100       IN0000012   36      1       2.44

AROBLJ  Customer   Document    Line   Item    Quantity
        1100       IN0000012          CA-78   1

Now let’s enter a receipt for this Invoice:

ARBTA   Batch Type   Batch   Entries   Total
        CA           40      1         2.44

ARTCR   Type   Batch   Entry   Check#   Customer
        CA     40      1       1111     1100

ARTCP   Type   Batch   Entry   Line   Customer   Document    Amt
        CA     40      1       20     1100       IN0000012   2.44

Now we post the Receipt applying it to the previous Invoice and get:

AROBL   Customer   Document    Batch   Entry   Invoice Amount
        1100       PY0000077   40      1       -2.44

AROBP   Customer   Document    Line   Check#   Amt
        1100       IN0000012          1111     -2.44
        1100       PY0000077          1111     2.44

Audit Files

Below are the main audit files that contain the A/R Audit History:

  • Posting Journals/GL Transactions: ARPJS → ARPJH → ARPJD
  • Item Sales History: ARITH → ARITS
  • Customer Statistics: ARCUS → ARCSM

Statements

Statements are generated by the ARSTMT processing View (AR0077); this isn’t a table, but it generates the data for a statement run. The data it generates goes into the following tables:

  • ARSTRUN (AR0110): Reprint Statement Header
    • ARSTCUS (AR0111): Reprint Statement Customers
      • ARSTOBL (AR0112): Reprint Statement Customer Invoices
        • ARSTOBP (AR0113): Reprint Statement Customer Receivables
    • ARSTNAT (AR0114): Reprint Statement NAT Customers

This data is only cleared by Clear History. When you generate the report, the View returns the STMTSEQ, which you have to set as a report parameter to get the correct statement.

Aging

Aging data is generated by the ARAGE (AR0055) Processing View. The data is generated each time aging is run and cleared the next time aging is run. To run aging, set the option fields on ARAGE and call its .Process method. When done, copy the AGESEQ field from the View to the AGESEQ report parameter. The tables that are populated are:

Aged Trial Balance: ARSLCUS → ARAGED

Statistics

Customer statistics are stored in the ARCSM table which is a detail of the ARCUS table:

  • ARCUS → ARCSM

Once you are collecting statistics, do not change the accumulate statistics options, as this will mess up the calculations, since we don’t recalculate already collected statistics. If you do change them, the effects will eventually flush through the system.

If you are creating summary or dashboard type reports, it’s easier to get the data here than to add up totals from the posted transaction tables.

Searching Based on Document Number

A/R is document centered: given a document number, you can retrieve all the relevant information from Sage ERP Accpac.

For example, given an invoice IN0000001:

To find out its current balance (source and functional), original balance (source and functional), document date, discount, due date, etc:

  • Search AROBL table by the index IDINVC.

To find out the Invoice entry data:

  • Search AROBL by the index IDINVC. The CNTBTCH/CNTITEM (BATCH and ENTRY) fields of AROBL point to the corresponding record(s) in ARIBC/ARIBH/ARIBD (BATCH/ENTRY are the primary key fields for those tables), which store the invoice entry.

To find out all the payments or adjustments against IN0000001:

  • Search AROBP by the IDINVC index, and you will find all the payments against this document.

To find out the G/L entries for this document:

  • First look up AROBL by the IDINVC index to find the posting sequence#, batch# and entry# for this document, and then search the ARPJH table on the primary key (TYPEBTCH, POSTSEQNCE, CNTBTCH, CNTITEM). TYPEBTCH is ‘IN’ for invoices, ‘AD’ for adjustments and ‘CA’ for payments. The fields GLBATCH/GLENTRY in ARPJH point to the record in GLJEH.
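
As a concrete sketch of the lookups above, here is how you might read AROBL and AROBP for an invoice over JDBC. The connection string and credentials are hypothetical, the Microsoft SQL Server JDBC driver is assumed to be on the classpath, and these tables should only ever be read for reporting; all updates must go through the Accpac business logic.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DocumentLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection to the company database (e.g. sample data SAMINC).
            Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=SAMINC", "sa", "password");

            // Header record for the document: balances, dates, batch/entry pointers.
            PreparedStatement obl = conn.prepareStatement(
                    "SELECT * FROM AROBL WHERE IDINVC = ?");
            obl.setString(1, "IN0000012");
            ResultSet rs = obl.executeQuery();
            while (rs.next()) {
                System.out.println("Posted from batch " + rs.getString("CNTBTCH")
                        + ", entry " + rs.getString("CNTITEM"));
            }
            rs.close();

            // All payments/adjustments applied against the document.
            PreparedStatement obp = conn.prepareStatement(
                    "SELECT * FROM AROBP WHERE IDINVC = ?");
            obp.setString(1, "IN0000012");
            ResultSet payments = obp.executeQuery();
            while (payments.next()) {
                System.out.println("Applied document: " + payments.getString("IDINVC"));
            }
            payments.close();
            conn.close();
        }
    }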

Summary

The database schema structure of the Accpac Accounting modules can be quite complicated, but hopefully this article gives you an idea of how the main flow of data processing works in A/R.

Written by smist08

April 9, 2011 at 5:13 pm

Day End Processing

Sage ERP Accpac Day End Processing is a rather simple form in I/C Periodic Processing. It only has Process and Cancel buttons and you are expected to run it after the close of business every day. But what does it do? The online help states:

Use this dialog box to:

  • Update costing data for all transactions (unless you chose the option to update costing during posting).
  • Produce general ledger journal entries from the transactions that were posted during the day (unless you do item costing during posting or create G/L transactions using the Create G/L Batch icon).
  • Produce a posting journal for each type of transaction that was posted.
  • Update Inventory Control statistics and transaction history.

Day End Processing also performs processing tasks for the Order Entry and Purchase Orders modules, if you have them:

  • Processing transactions that were posted during the day in Order Entry and Purchase Orders.
  • Activating and posting future sales orders and purchase orders that have reached their order date, and updating quantities on sales orders and purchase orders.
  • Removing quotes and purchase requisitions with expiration dates up to and including the session date for day-end processing.
  • Updating sales commissions.
  • Creating batches of Accounts Receivable summary invoices and credit notes from posted Order Entry transactions.
  • Deleting completed transaction details if you do not keep transaction history.
  • Updating statistics and history in Order Entry and Purchase Orders.

Below is the flow in and out of Inventory. All of these transactions affect costing and generate sub-ledger transactions.

The main purpose of Day End is to move a lot of processing away from data entry. Generally in a large Accpac installation you will have hundreds of people entering Orders, Invoices, POs, etc. They need to get their work done in the most efficient way possible. A person entering Orders from the CRM system doesn’t want to have to wait for A/R and G/L transactions to be created every time. What they care about is entering their Orders as quickly as possible. As a side benefit, Day End can batch all the transactions together, so rather than each Order creating a single G/L Batch, these can all be combined reducing the number of documents downstream.

However there are a number of misunderstandings and confusion about Day End. This blog posting is looking to cover a few topics around Day End to hopefully make things a bit clearer.

Over the years we have also changed the way Day End operates and added additional options to let you choose when things happen. Prior to version 5.2A, all the functions mentioned above had to be done during Day End and there were no alternatives. People became imaginative and ran macros to run Day End on a frequent basis. Why were people doing this? The main reason was that for many businesses, updating the costing in inventory only once a day is not sufficient; if costs are changing quickly, you want this reflected right away. Another reason is that if you operate your business 24×7, then you don’t have an after-hours time period when you can run this. Plus, for some people Day End was taking longer than the overnight period to complete.

In version 5.2A we introduced the feature of “Day End at Posting Time”. With this mode, essentially whenever you posted a document in I/C, O/E or P/O, we would run Day End. Then you never had to run the original stand-alone Day End screen, you could operate 24×7 without running a separate Day End process, and your I/C Costing was always up-to-date. This worked fine for some people (usually people with lower volume), but it caused problems for others. One was that it slowed down posting of documents too much and impeded the productivity of people posting Orders and such. Second, when you have longer transactions, you run a larger risk of multi-user conflicts (which are really quite annoying). Third, this resulted in a large number of G/L, A/R and A/P batches being produced. The usual workaround for people who really needed this was to turn off other features that slow down posting, such as “Keep Statistics” or “Keep History”. You can speed up posting quite a bit by turning off various options in the Accounting Modules’ Options screens; however, you then lose the use of these features, and deciding which to turn off can be a difficult trade-off.

In version 5.5A we introduced the feature of “Costing during Posting”. Here, when you post an I/C, O/E or P/O document, we run the Costing part of Day End, but not all the other parts. This turned out to be a good compromise: it didn’t noticeably slow down document posting and hence didn’t introduce more multi-user conflicts. So people can now keep their Costing data up-to-date without frequently running Day End. However, you still need to run the Day End Processing function at night to create all the sub-ledger documents, create audit history and perform the other miscellaneous functions.

Now let’s go through each of the Day End functions in a bit more detail.

Cost Transactions

This is based on the Item Costing Method and is where the costing buckets are updated for the affected items in I/C. Basically, depending on whether we are buying or selling (a small illustrative sketch follows this list):

  • Incoming Transactions: Increases Total Quantity/Cost in Location Details/Costing Buckets
  • Outgoing Transactions: Calculates/Removes Quantity/Cost from Location Details/Costing Buckets
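
As an illustration only: the product supports several costing methods and persists these buckets per item and location, but for a moving-average method the bucket arithmetic boils down to something like this minimal Java sketch.

    // A minimal moving-average costing bucket, assuming a simple in-memory model.
    public class CostingBucket {
        private double totalQuantity;
        private double totalCost;

        // Incoming transaction (e.g. a PO receipt): add quantity and cost.
        public void receive(double quantity, double cost) {
            totalQuantity += quantity;
            totalCost += cost;
        }

        // Outgoing transaction (e.g. an OE shipment): relieve cost at the average.
        public double ship(double quantity) {
            double unitCost = (totalQuantity == 0) ? 0 : totalCost / totalQuantity;
            double costRemoved = unitCost * quantity;
            totalQuantity -= quantity;
            totalCost -= costRemoved;
            return costRemoved; // becomes the cost-of-goods-sold amount
        }
    }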

Create Audit Information

Day End is responsible for populating all the various I/C, O/E and P/O audit history tables including:

  • Posting Journals (all transactions)
  • Item Valuation (all transactions)
  • IC Transaction History (all transactions)
  • IC Statistics (all transactions)
  • IC Serial Numbers Audit (IC and OE Shipments)
  • IC Sales Statistics (IC and OE Shipments)
  • OE Sales History (OE transactions)
  • OE Sales Statistics (OE transactions)
  • OE Commissions Audit (OE Invoices/Credit Notes/Debit Notes)
  • PO Purchase History (PO transactions)
  • PO Payables Clearing Audit (PO transactions)

Generate GL/AR/AP/PM/FA Entries

Create all the various batches in the sub-ledgers. These include:

  • GL Entries (all transactions)
  • AR Entries (OE Invoices, Credit Notes, and Debit Notes)
  • AP Entries (PO Invoices, Credit Notes, and Debit Notes)
  • PM Entries (IC Shipments, OE Shipments, PO Purchase Orders / Receipts / Invoices / Returns / Credit Notes / Debit Notes)
  • FA Entries (IC Internal Usages, PO Receipts)

Miscellaneous Functions

Then there are a collection of miscellaneous functions that include:

  • Activates future orders (OE)
  • Deletes expired quotes that have not been activated (OE)
  • Deletes completed orders if “Keep History” is OFF (OE)
  • Activates future purchase orders (PO)
  • Clears completed transactions if “Keep History” is OFF (PO)

Day End Processing Structure

Day End Processing (DEP) is structured as follows:

Note that DEP doesn’t process all transactions in chronological order.

Summary

Your best bet is to use “Costing during Posting”. This will give you real-time costing without badly affecting performance. As ERP packages address larger organizations, there tend to be more and more of these types of operations: the more people doing data entry, the less you want them burdening the application and database servers, to maximize productivity. Many tier-one ERP packages split this processing into more parts; the advantage is that several parts (that aren’t adjacent) can be run at once without causing multi-user conflicts. Sage ERP Accpac has always been under pressure to combine things into all-in-one operations that work well for smaller businesses; however, if we are to move the operations suite into larger Enterprises, we will have to slice these processes up more finely.

Written by smist08

April 2, 2011 at 4:21 pm
