Stephen Smith's Blog

All things Sage ERP…


Sage Intelligence Go!


Introduction

A new flavor of Sage Intelligence, called Sage Intelligence Go! (SIG), was demonstrated by Himanshu Palsule during his keynote at Sage Summit. I thought I’d spend a bit of time this week providing a few more details about what it is and where it fits into the scheme of things.

Sage Intelligence is a business intelligence/reporting tool used by many Sage ERP products. Those who have been around the Sage 300 ERP community will recognize it as Alchemex, which Sage purchased a few years ago. It runs as a Microsoft Excel add-in that extracts data from the ERP so it can be manipulated by the power of the Excel spreadsheet engine. This is a very popular way of doing Financial Reporting, since you often want to present the data graphically, perform complex calculations, or slice and dice the data using pivot tables. Excel provides a great platform for performing these tasks. The original Financial Reporter bundled with Sage 300 ERP works in a similar manner as an Excel add-in. Sage Intelligence is used for quite a few things beyond Financial Accounting, including Sales Analysis and other predictive BI functions.

Sage Intelligence has been around for a while and is a good mature product. Since it is written with the full Excel add-in SDK, it must run in Excel and specifically the locally installed version of Excel. This means it isn’t particularly well suited for cloud applications such as Office Online.

Microsoft has now released a newer SDK for developing Office apps. This new SDK is designed to allow you to develop applications that can still run in the locally installed full Microsoft Excel, but they can also run inside the cloud/web based Excel Online as well as the new specialty versions of Office like the one for the iPad.

As Sage ERP applications move to the cloud, you want your Financial Reporter to be cloud based as well. You want to be able to edit Financial Reports in the browser, as well as display and interact with reports on any device, including your tablet or smart phone. This is where Sage Intelligence Go! comes in. It is written with the new Office Apps SDK and will run in the online and device versions of Excel as well as the full locally installed Excel. Thus you don’t need a Windows PC at all to use your cloud based ERP and cloud based Financial Reporter, yet you still have all the power of Excel helping you visualize and manipulate your reports.


Sage One Accounting

Sage One Accounting by Sage Pastel is such a cloud based ERP system. This product offers Sage Intelligence Go! as an option for performing various reporting needs. When SIG started out here, it was much simpler. The original Office Apps SDK was very simple and didn’t have nearly the power of the various older SDKs supported by local Excel. However, as time has gone by, the Office App SDK has become much stronger and the functionality is now much more in line with what we expect of such a solution.

Office Apps

Those of you who attended Sage Summit and saw James Whittaker’s keynote will have seen the importance Microsoft is placing on the new Office Apps. Basically, rather than switching to the browser and search, or using apps from a mobile device app store, you get your data directly inside your business applications via this new sort of app. If you go to the Office App Store and search on Sage, you will find a Sage One app for Outlook and Sage Intelligence Go! If you search a bit more broadly, you won’t find that many non-trivial applications; Sage Intelligence Go! is probably the most sophisticated Office App in the store.

Notice that you don’t run SIG reports from the ERP application; they run from Excel. Once you have the XLSX file, you can just open it anytime from any version of Excel and have full access to your data. This is really a new paradigm for doing reporting. For SIG this works really well. Whether other applications fit this model remains to be seen.

Web Services

SIG needs to communicate both with the Office API (whether cloud or local) as well as with the ERP to get the data to process. Both of these functions are accomplished via RESTful web services. The communication with the ERP must be via Web Services since these could originate from the SIG Office App running in the Microsoft Office Online Cloud or from the Office App running in the local Excel. It isn’t possible to use the traditional methods of database integration like ODBC when the two may be running in completely different locations.

Basically, the Sage cloud based ERP exposes a standard set of Web Services that are protected by Sage ID. When the user starts SIG, they get a Sage ID login prompt from within Excel, and this is transmitted through to the Sage cloud ERP, which uses the Sage ID to look up which tenants this user runs as. This information is relayed back to SIG in case it needs a second prompt to ask which tenant (company) to access.
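As a rough sketch of this style of integration (not SIG’s actual code; the endpoint URL, route and token handling below are made up for illustration), a client calls the ERP’s RESTful services passing along the access token obtained from the identity sign-in:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ErpWebServiceClientSketch
    {
        // Hypothetical base address and route; a real client would use the token
        // returned by the Sage ID sign-in and the service's published endpoints.
        private static readonly Uri BaseUri = new Uri("https://erp.example.com/api/");

        public static async Task<string> GetTenantsAsync(string accessToken)
        {
            using (var client = new HttpClient { BaseAddress = BaseUri })
            {
                // The identity token travels with every request; the service maps it
                // to the tenants (companies) this user is allowed to access.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", accessToken);

                HttpResponseMessage response = await client.GetAsync("tenants");
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();   // e.g. JSON list of tenants
            }
        }
    }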

Sage Intelligence Go! consists of a server component that runs in the cloud as well as the Office App that runs in Excel. The server portion of SIG uses the provided Web Services to load the data to be reported on from the ERP into its in-memory database for processing. This in-memory database is maintained in the cloud where the SIG service runs. The Office App part of SIG then interacts with the server side to present the required data and perform various processing functions.


ERP Integration

This solution requires that the ERP provide Web Services that are exposed to the Internet in general, so that they can be accessed via Excel running anywhere, whether installed locally, running on an iPad or running as a Web Application in the cloud. This means these Web Services need to be secure and available 24×7 with very good availability. For this reason you will see SIG first integrating with Sage cloud based ERPs (like the current Sage One Accounting by Sage Pastel) and later via the Sage Data Cloud, which will offer these services on behalf of on-premise installed ERP systems.

Summary

Sage Intelligence Go! is our solution for cloud reporting. Its primary purpose is to provide Financial Reporting capabilities, but it is also capable of handling other BI type reporting needs. This is our solution to moving a lot of reporting functions into the cloud.


 

 

Sage Summit 2014


Introduction

I’m just back from Sage Summit 2014, which was held at the Mandalay Bay Resort/Hotel in Las Vegas, Nevada. There were over 5200 attendees at the show, a new record for Sage. The Mandalay Bay is a huge complex and I racked up a record number of steps for GCC getting from one place to another. Las Vegas is easy to get to for most people since there are a lot of direct flights from around North America, and you can find really cheap hotel accommodation near the conference (like $29 at the Excalibur, which is connected to Mandalay Bay by a free tram). The only downside to having the conference in Vegas is that smoking is still allowed in many public places, which is really annoying.

The conference had a great many guest speakers including quite a few celebrities like Magic Johnson and Jessica Alba. The convention trade show wasn’t just booths, there were also open speaking theatres that always had something interesting going on as well as the Sage Innovation Lab Exhibit.

There were a great many product breakout sessions as well as a large number of breakout sessions on general business and technology topics. The intent was to make Sage Summit a place to come to for a lot more than just learning new technical details about your Sage product, or promoting new add-ons for you to purchase. A lot of customers attending the show told me that it was these general sessions on accounting, marketing and technology that they found the most useful.

The show was huge and this blog post just covers a few areas that I was directly involved in or attended.

Great General Sessions

Besides the mandatory Sage keynotes, there were quite a few general sessions which were quite amazing. My favorite was Brad Smith’s interview with Biz Stone, the co-founder of Twitter and Jelly. Biz certainly provided a lot of interesting context on Web startups, as well as a lot of interesting stories of why he left Google and chose the path he chose. It was certainly interesting how many of the successful founders left very secure, lucrative careers to really struggle for years to get their dreams off the ground. A common theme was the need for persistence so you could survive long enough to eventually get that big break. Another common theme was to follow people and ideas rather than companies and money. Now I’m going to have to read Biz’s book: “Things a Little Bird Told Me: Confessions of the Creative Mind”.


Another very popular session was the panel discussion with Magic Johnson, CEO of Magic Johnson Enterprises, Jessica Alba, co-founder of the Honest Company, and J. Carrey Smith, CEO of Big Ass Solutions. This discussion concentrated on their current businesses and didn’t delve into their celebrity pasts, for which at least two panelists are rather well known. There were a lot of good business tips given, and it was interesting to see how Magic Johnson and Jessica Alba have adapted what they did before to become quite successful CEOs.


Sage’s Technology Vision

A lot of Sage’s technology and product presentations were about our mobile and cloud technology vision. The theme was to aggressively move into these areas with purposeful innovation that still protects the investment our customers have in our current technologies. At the heart of this vision is the Sage Data Cloud. This acts as a hub which mobile solutions can connect to, as well as a way that data can be accessed in our existing products, whether in the cloud or installed on premise. Below is the architectural block diagram showing the main components of this.

[Architecture diagram: the main components of the Sage Data Cloud vision]

This is perhaps a bit theoretical, but we already have products in the market that are filling in key components of this vision. Some of these are included in the next diagram.

[Diagram: products already in the market filling in parts of this architecture]

We use the term “hybrid cloud” quite a bit; this indicates that you can have some of your data on premise and some of your data in the cloud. There are quite a few use cases that people desire. Not everyone is sold on trusting all their data to a cloud vendor for safekeeping. In some industries and countries there are tight regulatory controls on where your data can legally be located. The Hybrid Cloud box in the diagram includes Sage 50 ERP (US and Canadian), Sage 100 ERP and Sage 300 ERP.

To effectively operate mobile and web solutions, you do need to have your data available 24×7 with a very high degree of uptime and a very high degree of security. Most small or mid-sized business customers cannot afford sufficient IT resources to maintain this for their own data center. One solution to this problem is to synchronize a subset of your on premise ERP/CRM data to the Sage Data Cloud and then have your mobile solutions accessing this. Then it becomes Sage’s responsibility to maintain the uptime, 24×7 support and apply all the necessary security procedures to keep the data safe.

Another attraction for ISVs is to integrate their product with the Sage Data Cloud and then let the Sage Data Cloud handle all the details of integrating with the many Sage ERP products. This way they only need to write one integration rather than separate integrations for Sage 50 ERP, Sage 100 ERP, Sage 300 ERP, Sage 300 CRE, etc.

We had a lot of coverage of the Sage 300 Online offering which has been live for a while now. This was introduced last Summit and now offers Sage 300 ERP customers the choice of installing on premise or running in the Azure cloud. Running in the cloud saves you having to back up your data, perform updates or maintain servers or operating systems. This way you can just run Sage 300 and let Sage handle the details. Of course you can get a copy of your data anytime you want and even move between on premise and the cloud.

The Sage Innovation Lab

At the trade show we had a special section for the Sage Innovation Lab. Here you could play with Google Glass, Pebble watches, 3D printers and all sorts of neat toys, and see some prototypes and experiments that Sage is working on with these. We don’t know if these will all be productized, but it’s cool to get a feel for how the future might begin to look.

Summary

This really was Sage Summit re-imagined. There were a great many sessions, keynotes and demonstrations on all sorts of topics of interest to businesses. This should be taken even further for next year’s Sage Summit which will be in New Orleans, LA on July 27-30, 2015. Does anyone else remember all those great CA-World’s in New Orleans back in the 90s?

Written by smist08

August 2, 2014 at 8:07 pm

Using the Sage 300 .Net Helper APIs


Introduction

The main purpose of our .Net API is to access our Business Logic Views which I’ve blogged on in these articles:

An Introduction to the Sage 300 ERP .Net API
Starting to Program the Sage 300 ERP Views in .Net
Composing Views in the Sage 300 ERP .Net API
Using the Sage 300 ERP View Protocols with .Net
Using Browse Filters in the Sage 300 ERP .Net API
Using the Sage 300 .Net API from ASP.Net MVC
Error Reporting in Sage 300 ERP
Sage 300 ERP Metadata
Sage 300 ERP Optional Fields

However, there are a number of simple things that you need to do repeatedly, and it would be a bit of a pain to go through the Views every time to do them. So in our .Net API we provide a number of helper APIs that give you quick, efficient access to things like company, fiscal calendar and currency information.

With Sage Summit 2014 just a few weeks away (you can still register here), I can’t pre-empt any of the big announcements here in my blog (as much as I’d like to), so perhaps a bit of an easier .Net article instead. For many, these examples are fairly simple, but I’m always getting requests for source code, and I happen to have a test program that exercises these APIs that I can provide as an example. This program was written to help test and debug these APIs for our 64 Bit/Unicode version, which might explain why it tends to print a rather strange selection of fields from some of the classes.

Sample Program

The sample program for this article is a simple WinForms application that uses the Sage 300 ERP API to get various information from these helper classes and then populates a multi-line edit control with the information gathered. The code is the dotnetsample (folder or zip) located in the folder on Google Drive at this URL. The code is hard coded to access SAMLTD with ADMIN/ADMIN as the user id and password. You may need to change this in the session.Open call to match what you have installed/configured on your local system. I’ve been building and running this using Visual Studio 2013 with the latest SPs and the latest .Net.


The Session Class

The Session class is the starting point for everything else. Besides opening the session and establishing the DBLink, you can use this class to get some useful version information as well as some information about the user, like their language.
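As a minimal sketch of getting going, following the patterns from the earlier articles in this series (the Init arguments for application ID, program name and version are placeholders you would adjust for your own application):

    using System;
    using ACCPAC.Advanced;

    class SessionSample
    {
        static void Main()
        {
            // Create and initialize the session. The application ID, program name
            // and version strings below are placeholders.
            Session session = new Session();
            session.Init("", "XY", "XY1000", "62A");

            // Sign on to the sample company; change these to match your install.
            session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);

            // The DBLink is your connection to the company database; the Views and
            // the helper classes described below all hang off of it.
            DBLink dbLink = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);

            Console.WriteLine("Session opened against SAMLTD.");
        }
    }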

The DBLink Class

From the session you get a DBLink object that is then your connection to the database and everything in it. From this object you can open any of our Business Logic Views and do any processing that you like. Similarly you can also get quick access to currency and fiscal calendar information from here. Of course you could do much of this by opening various Common Service Views, but this would involve quite a few calls. Additionally the helper APIs provide some caching and calculation support.
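For instance, continuing from the session sketch above, opening a Business Logic View directly from the DBLink looks roughly like this (a sketch based on the earlier View articles; the CONAME field name is used for illustration and may differ):

    // Open the Company Profile view (CS0001) directly and read a field from it.
    // The Company helper class described below is the more convenient route for this data.
    View csCompany = dbLink.OpenView("CS0001");
    csCompany.Browse("", true);
    csCompany.Fetch(false);
    string companyName = csCompany.Fields.FieldByName("CONAME").Value.ToString();
    Console.WriteLine("Company: " + companyName);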

The Company Class

Accessing the Company property of the DBLink object is your quick shortcut to the various company options information stored in Common Services in the CS0001 View. This is where you get things like the home currency, the number of fiscal periods, whether the company is multi-currency, or address type information. Generally you will need something from here in pretty much anything you do.
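A short sketch of pulling a few of these values, again using the dbLink from the earlier example; the property names used here (Name, HomeCurrency, NumberOfFiscalPeriods, Multicurrency) are assumptions for illustration, so check the class (or the sample program) for the exact ones:

    // Shortcut to the company options (CS0001) via the DBLink helper.
    // Property names below are illustrative; consult the Company class for the real ones.
    Company company = dbLink.Company;
    Console.WriteLine("Company name:   " + company.Name);
    Console.WriteLine("Home currency:  " + company.HomeCurrency);
    Console.WriteLine("Fiscal periods: " + company.NumberOfFiscalPeriods);
    Console.WriteLine("Multi-currency: " + company.Multicurrency);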

The FiscalCalendar Class

You can get a FiscalCalendar object from the FiscalCalendar property of the DBLink. In accounting, fiscal periods are very important, since everything is eventually recorded in the General Ledger in a specific fiscal year/fiscal period. G/L mostly doesn’t care about exact dates, but really cares about the fiscal year and period. For accurate accounting you always have to be very careful that you are putting things in the correct fiscal year and period. In Common Services we set up our financial years and fiscal periods, assigning them various starting and ending dates. Corporate fiscal years don’t have to correspond to a calendar year and usually don’t. For instance, the Sage fiscal year starts on October 1 and ends on September 30.

This object then gives you methods and properties to get the starting and ending dates for fiscal periods, years or quarters. Further, it helps you calculate which fiscal year/period a particular date falls in. Often all these calculations are done for you by the Views, but if you are entering things directly into G/L they can be quite useful. Some of the parameters to these methods are a bit cryptic, so perhaps the sample program will help anyone having trouble.
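Something along these lines; the method names and parameters shown here are made up to illustrate the idea (as noted, the real ones are a bit cryptic), so use the sample program for the actual signatures:

    // Illustrative only: these method names and parameters are stand-ins for the
    // real (somewhat cryptic) FiscalCalendar calls.
    FiscalCalendar calendar = dbLink.FiscalCalendar;

    DateTime invoiceDate = new DateTime(2014, 11, 15);

    // Which fiscal year and period does this date fall into?
    string fiscalYear = calendar.GetYear(invoiceDate);
    short fiscalPeriod = calendar.GetPeriod(invoiceDate);

    // Starting and ending dates for that period.
    DateTime periodStart = calendar.GetPeriodStartDate(fiscalYear, fiscalPeriod);
    DateTime periodEnd = calendar.GetPeriodEndDate(fiscalYear, fiscalPeriod);

    Console.WriteLine("{0:d} falls in fiscal {1} period {2} ({3:d} to {4:d})",
        invoiceDate, fiscalYear, fiscalPeriod, periodStart, periodEnd);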

The Currency Classes

There are several classes for dealing with currencies, there are the Currency, CurrencyTable and CurrencyRate classes. You get these from the DBLink’s GetCurrency, GetCurrencyTable and GetCurrencyRate methods. There is also a GetCurrencyRateTypeDescription method to get the description for a given Currency Rate Type.

The Currency object contains information for a given currency, like the description, number of decimals and decimal separator. Combined with the Currency Rate Type, there is a Currency Table entry for each Currency Code and Currency Rate Type combination. Then for each of these there are multiple CurrencyRates, giving the rate for that currency on a given date.

So if you want to do some custom currency processing for some reason, then these are very useful objects to use. The sample program for this article has lots of examples of using all of these.
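A sketch of putting the pieces together with the same dbLink as before; the GetCurrency, GetCurrencyTable, GetCurrencyRate and GetCurrencyRateTypeDescription methods are the ones named above, but the exact parameters and the properties read back are assumptions for illustration:

    // Method names are from the DBLink as described above; the parameters and
    // properties shown are illustrative and may not match the API exactly.
    Currency usd = dbLink.GetCurrency("USD");
    Console.WriteLine("USD: " + usd.Description + ", decimals: " + usd.Decimals);

    // "SP" (spot rate) is a common rate type in the sample data; yours may differ.
    Console.WriteLine("Rate type SP: " + dbLink.GetCurrencyRateTypeDescription("SP"));

    CurrencyTable usdSpotTable = dbLink.GetCurrencyTable("USD", "SP");
    CurrencyRate todaysRate = dbLink.GetCurrencyRate("USD", "SP", DateTime.Today);
    Console.WriteLine("USD spot rate today: " + todaysRate.Rate);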

Remember to always test your programs against a multi-currency database. A common bug is to do all your testing against SAMINC and then have your program fail at a customer site that is running multi-currency. Similarly, it helps to test with a home currency like Japanese Yen that doesn’t have two decimal places.

Summary

This was just a quick article to talk about some of the useful helper functions in our Sage 300 ERP .Net API that help you access various system data quickly. You can perform any of these functions through the Business Logic Views, but since these are used so frequently, they save a lot of programming time.

Evicting Users from Sage 300 ERP


Introduction

Generally Sage 300 ERP is used in a multi-user environment where users could be distributed across a large building or located at many different sites. Further, Sage 300 ERP uses a concurrent licensing model for users, so if you have 10 Lanpaks then 10 people can log in at once; however, it doesn’t matter which ten people it is.

Often companies save a bit of money by buying fewer Lanpaks than users of the product. Perhaps a clerk works the early shift of 7-3 and then when they go home a Financial Accountant runs some Financial Reports. But what happens if that clerk doesn’t sign off? What if they work at home and aren’t answering their phone? Now the Financial Accountant gets a message that all the Lanpaks are in use and can’t get their work done.

Evicting Users

To solve this problem, Sage 300 ERP 2014 Product Update 2 will introduce an Evict Users feature. Previously we provided a detailed list of everyone in the system and what they are doing, which I blogged on here. Now you can also kick them out of the system to recover the Lanpak for someone else to use.

From the Current Users screen there is now a push button to “Sign Out Selected Users”. You then get a dialog with a dire warning and are requested to enter the admin password and confirm to kick out the desired user.


Then in a minute or so, all the screens for that user will be terminated and their Lanpaks will be available for someone else to use.

Technical Details

So how is this accomplished? Basically, when you evict a user, the screen stores an encrypted file in the shared data folder. Periodically, any Lanpak Managers running have a look to see if there is a new file, and if there is, they check whether it is for users they are managing. If so, the Lanpak Manager will kill the processes of all the screens those users are running. The file is left in place for a few minutes, so the evicted user won’t be able to sign in again immediately.
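As a toy illustration of the general polling pattern only, and not Sage’s actual implementation (the file name, folder and process matching below are all made up, and the real marker file is encrypted), a watcher loop might look roughly like this:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class EvictionWatcherSketch
    {
        // All of these values are illustrative stand-ins.
        const string SharedFolder = @"\\server\SharedData";
        const string MarkerFile = "evict_USERID.flag";

        static void Main()
        {
            while (true)
            {
                string marker = Path.Combine(SharedFolder, MarkerFile);
                if (File.Exists(marker))
                {
                    // Terminate the screens belonging to the evicted user. Matching by
                    // process name is a stand-in for the Lanpak Manager's bookkeeping.
                    foreach (Process screen in Process.GetProcessesByName("SomeScreenExe"))
                        screen.Kill();
                }

                Thread.Sleep(TimeSpan.FromMinutes(1));   // poll periodically
            }
        }
    }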

This is a fairly simple scheme that is fairly effective for recovering Lanpaks. It works both for regularly run screens from the desktop as well as web deployed screens from the Web Desktop or from Sage CRM.

Since it’s by user, you can kill the ADMIN user, which will kill yourself. If all your users sign in as ADMIN then it will kill all the users on the system. So beware that if multiple users share a user id, then they will all be killed and not just one workstation.

Performing Upgrades

Another use case that this method only partially addresses is kicking everyone out of the system so you can perform an upgrade (like a Product Update). Generally a DLL or EXE file in Windows is locked if a running process uses it. Hence you can’t update Sage 300 if someone has a4wapi.dll in use, for instance. It could be that this method does get everyone out of the system, but there are a few cases which may not work:

  • If someone signs off the Sage 300 Desktop but leaves it running. In this case the EXE and DLLs are still in use, but since it isn’t using a Lanpak and isn’t associated with a user, it can’t be killed this way. However I tend to think this is fairly unlikely.
  • It won’t be effective against things that quickly open sessions, do something and then close the session. This would include things like the Sage CRM integration where the custom Sage CRM pages open a session load their data and close the session. However things like this tend to be Web Servers and can usually be stopped remotely or at least from a central place.
  • Things that we allow to use Sage 300 without using a Lanpak. This would include things like parts of the Sage CRM integration, the Sage HRMS integration and the Sage 300 Portal.
  • For third party products, if they are full SDK, it will definitely work on them. If they aren’t full SDK it may or may not work depending on how they are built.

Keep in mind that the main purpose of this feature is to manage Lanpaks; performing upgrades is just a secondary thing, where hopefully it helps, but it may not be a complete solution.

The Scary Warning

The warning message when you run this is fairly severe. Part of this is because we are killing the processes that are connected to our API. Generally you won’t corrupt the main database because everything is protected by transactioning. So if you do kill the process while they are say posting a batch, it just means the transaction in process will be rolled back and they will have to do it again later. Generally the expectation is that this feature is used once people go home and aren’t doing anything which would be harmless. Further in the user list screen you can see what people are doing, so don’t kill the person running Day End.

However if someone is using the UI and they are resizing columns in a grid, you could catch things at the wrong time and corrupt a *_p.ism file. But these can be deleted and sometimes repaired with ScanIsam. If you are running a non-SDK third party product, I can’t really say what will happen if it’s killed.

Summary

The Evict Users feature was the top-voted feature on our Ideas web site, and it is now in the product. So keep making suggestions to https://www11.v1ideas.com/Sage300ERP/Accpac and voting on suggestions that are in the system that you would like to see implemented.

This feature will make it easier for companies to manage their Lanpaks and get better value from the system. Hopefully it will make managing upgrades a bit easier as well.

Written by smist08

May 24, 2014 at 4:17 pm

Disaster Recovery


Introduction

In a previous blog article I talked about business continuity and what you need to do to keep Sage 300 ERP up and running with little or no downtime. However, I mushed together two concepts, namely keeping a service highly available and having a disaster recovery plan. In this article I want to pull these two concepts apart and consider them separately.

We’ve had to give these two concepts a lot of thought when crafting our Sage 300 Online product offering, since we want to have this service available as close to 100% as possible and then if something truly catastrophic happens, back on its feet as quickly as possible.

Terminology

There is some common terminology which you always see in discussions on this topic:

RPO – Recovery Point Objective: this is the maximum tolerable period in which data might be lost due to a major incident. So for instance if you have to restore from a backup, how long ago was that backup made.

RTO – Recovery Time Objective: this is the duration of time within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences. For instance if a computer fails, how long can you wait to replace it.

HA – High Availability: usually concerns keeping a system running with little or no downtime. This doesn’t include scheduled downtime and it usually doesn’t include a major disaster like an earthquake eating a datacenter.

DR – Disaster Recovery: this is the process, policies and procedures that are related to preparing for recovery or continuation of technology infrastructure which are vital to an organization after a natural or human-induced disaster.

High Availability

HA means creating a system that can keep running when individual components fail (no single point of failure), like one computer’s motherboard frying, a power supply failing or a hard disk failure. These are reasonably rare events, but often systems in data centers run on dozens of individual computers and things do fail and you don’t want to be down for a day waiting for a new part to be delivered.

Of course, if you don’t mind being down for a day or two when things fail, then there is no point spending the money to protect against this. This is why most businesses set RPO and RTO targets for these types of things.

Some of this comes down to procedures as well. For instance, if you have all redundant components but then run Windows Update on them all at once, they will all reboot at once, bringing your system down. You could schedule a maintenance window for this, but generally if you have redundant components you can Windows Update the first one, and once it’s fine and back up, then update the secondary.

If you are running Sage ERP on a newer Windows Server and using SQL Server as your database then there are really good hardware/software combinations of all the standard components to give you really good solid high availability. I talked about some of these in this article.

Disaster Recovery

This usually refers to having a tested plan to spin up your IT infrastructure at an alternate site in the case of a major disaster, like an earthquake or hurricane wiping out your currently running systems.


Again, how much money you spend on this will depend on your RPO/RTO requirements. For instance, do you purchase backup hardware and have it ready to go in an alternate geographic region (far enough away that the same disaster couldn’t take out both locations)?

For sure you need to have complete backups of everything that are stored far away that you can recover from. Then it’s a matter of acquiring the hardware and restoring all your backups. Often people are storing these backups in the cloud these days, this is because cloud storage has become quite inexpensive and most cloud storage solutions provide redundancy across multiple geographies.

The key point here is to test your procedure. If your DR plan isn’t tested then chances are it won’t work when it’s needed. Performing a DR drill is quite time consuming, but really essential if you are serious about business continuity.

Azure

One of the attractions of the cloud is having a lot of these things done for you. Sage 300 Online handles setting up all its systems for HA, as well as having a tested DR plan ready to implement. Azure helps by having many data centers in different locations and by having a lot of HA and DR features built into its components (especially the PaaS ones). This removes a lot of management and procedural headaches from running your business.

Hard Decisions

If a data center is completely wiped out, then the decision to execute the DR plan is easy. However, the harder decision comes when the primary site has been down for a few hours, people are working hard to restore service, but it seems to be dragging on. Then you have a hard decision: kick in the DR plan, or wait to see if the people recovering the primary site can be successful. These sorts of outages are often caused by electrical problems, or problems with large SANs.

One option is to start spinning up the alternative site, restoring backups if necessary and getting ready, so when you do make the decision you can do the switch over quickly. This way you can often delay the hard decision and give the people fixing the problem a bit more time.

Dangers

Having a good tested DR plan is the first step, but businesses need to realize that if a major disaster, like an earthquake, wipes out a lot of data centers, then many people are going to activate their DR plans at once. This scenario won’t have been tested. We could easily experience a cascading outage where the high usage causes many other sites to go down, until the initial wave passes. Generally businesses have to be prepared to not receive good service until everyone is moved over and things settle down again.

Summary

Responsible companies should have solid plans for both high availability and disaster recovery. At the same time they need to compare the cost of these against the time they can afford to be down against the probability of these scenarios happening to them. Due to the costs and complexities of handling these scenarios, many companies are moving to the cloud to offload these concerns to their cloud application provider. Of course when choosing a cloud provider make sure you check the RPO and RTO that they provide.

Written by smist08

April 12, 2014 at 5:56 pm

Unstructured Time at Sage


Introduction

Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically you give employees a number of hours each week to work on any project they like. They do need to make a proposal and, at the end, give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general thinks are not worthwhile, too risky, or a very low priority. Companies like Google and Intuit have been very successful at implementing this and getting quite good results.


Unstructured Time at Sage

The Sage Construction and Real Estate (CRE) development team at Sage has been using unstructured time for a while now. They have had quite a lot of participation and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers including ours, here in Richmond, BC.

At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.

The goal is for people to work on things they are passionate about. To get a chance to play with new bleeding edge technologies before anyone else. To develop that function, program or feature that they’ve always thought would be great, but the business has always ignored. I’m really looking forward to what the team will come up with.


We are still doing Hackathons, Ideajams and our regular innovation process. This is just another initiative to further drive innovation at Sage.

Crazy Projects at Google

Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google’s unstructured time that leads to self-driving cars, Google Glass, military robots, human brain simulations or any of their many green projects? Hopefully these get turned into good things and aren’t just Google trying to create SkyNet for real. Maybe we’ll let our unstructured time go crazy as well?

Anathem

I’m a big fan of Neal Stephenson, and recently read his novel Anathem. Neal’s novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians who are divided up into groups by how often they report their results to the outside world. The lowest order reports every year, next is a group that reports every ten years, then a group that reports every 100 years and finally the highest group that only reports every 1000 years. These groups don’t interact with anyone outside their order except for the week when they report and exchange information/literature with the outside world. This is in contrast to how we operate today, where we are driven by “internet time” and have to produce results quickly and ignore anything that can’t be done quickly.

So imagine you could go away for a year to work on a project, or go away for ten years to work on something. Perhaps going away for 100 years or 1000 years might pose some other problems that the monks in the novel had to solve. The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? Certainly an intriguing prospect, contrasting with how we currently need to produce something every few months.

My Project

So why am I talking about Anathem and unstructured time together? Well, one problem we have is how to get started on big projects with lots of risk. Suppose you know we need to do something, but doing it is hard and time consuming? Every journey has to start with a first step, but sometimes making that first step can be quite difficult. I’ve had the luxury of being able to do unstructured time for some time, because I’m a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won’t be on Product Managers’ road maps.

So I’ve done simple POCs in the past, like producing a mobile app using Argos. But more recently I embarked on producing a 64-bit version of Sage 300. This worked out quite well and wasn’t too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they are is very difficult. As I get a Unicode G/L going, it becomes easier to estimate, but I couldn’t have taken the first step on the project without using unstructured time.

Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (that they are accountable for their estimates). This has the side effect that they are then very resistant to work on things that are open ended or hard to estimate. Generally for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only encouraging meeting commitments and making good estimates.

Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.

Summary

Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge from it. Of course we are just starting this process, so it will take a little while for things to get built.

Multi-Threading in Sage 300


Introduction

In the early days of computing you could only run one program at a time on a PC. This meant that if you wanted to run 10 programs at once you needed 10 computers. Then, bit by bit, multitasking made its way from mainframes and Unix to PCs, which allowed you to run quite a few programs at a time. Now you could run all 10 programs on one computer, and this worked quite well. However, the overhead was still quite high, since each program used a lot of memory and switching between them wasn’t all that fast. This led to the idea of multi-threading, where you run very lightweight tasks inside a single program. These use the same memory and resources as the program they are running in, so switching between them is very quick and the resources used by adding more threads are minimal.

Enter the Web

Think about how this affects you if you are building a web server, where you basically want to run your programs on the web server, and consider what happens if you are running in the cloud. If you were single process, then each web user running your app would have a separate VM to handle their requests, and they would interact with that VM. There would be a load balancer that routes each user’s requests to the appropriate VM. This is quite an expensive way to run, since you typically pay quite a bit a month for each VM. You might be surprised to learn that there are quite a few web applications that run this way. The reason they do this is for greater security, since in this model each user is completely separated from the others because they are really running against separate machines.

The next level is to have the web server start a separate process to handle the requests for a given user. Basically when a new user signs on, a new process is started and all his requests are routed to this process. This model is typically used by applications that don’t want to support multi-threading or have other concerns. Again quite a few web applications run this way, but due to the high resource overhead of each process, you can only run at best a hundred or so users per server. Much better than one per VM, but still for the number of customers companies want to use their web site, this is quite expensive.

The next level of efficiency is to have each new user that signs on just start a new thread. This is way less overhead, since a thread uses only a small amount of thread local storage and switching between running threads is very quick. Now we are getting into having thousands of active users running off each web server.


This isn’t the whole story. The next step is to make your application stateless. This means that rather than each user getting their own thread, we put all the threads in a common pool. Then when a request for a user comes in, we just use a free thread from the pool to process the request. This way we don’t keep any state on the server for each user, and we only need enough threads to handle the number of active requests at a given time. This means that while a user is thinking or reading a response, they are using no server resources. This is how you get web applications like Facebook that can handle billions of users (of course they still use tens of thousands of servers to do this).

These techniques aren’t only done in the operating system software, but modern hardware architectures have been optimized for these techniques as well. Modern server CPUs have multiple cores which are very efficient at running multiple threads in parallel. To really take advantage of the power of these processors you really need to be a multi-threaded application.

Sage 300 ERP

As Sage 300 moves to the cloud, we have the same concerns. We’ve been properly multi-process since our 32-Bit version, back in the version 4 days (the 16-Bit version wasn’t really multi-process because 16-Bit Windows wasn’t properly multi-process).

We laid the foundations for multi-threaded operation in version 5.6A and then fully used it starting with version 6.0A for the Portal and Quote to Orders. Since then we’ve been improving our multi-threading as it is a very foundational component to being able to utilize our Business Logic Views from Web Applications.

If you look at a general textbook on multi-threading it looks quite difficult, since you have to be very careful to protect the right memory at the right time. However, a lot of the time these books are looking at highly efficient parallel algorithms, whereas we want a thread to handle a specific request for a specific user to completion. We never use multiple threads to handle a single request.

From an API point of view this means each thread has its own .Net session object and its own set of open Sage 300 Business Logic Views. We keep these cached in a pool to be checked out, but we never have more than one thread operating on one of these at a time. This then greatly simplifies how our multi-threading support needs to work.
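A rough sketch of the kind of pooling this implies (the pool class below is hypothetical and not the actual Sage 300 implementation): each worker thread checks out a session, with its set of open Views, for the duration of one request and then returns it to the pool.

    using System;
    using System.Collections.Concurrent;
    using ACCPAC.Advanced;

    // Hypothetical pool for illustration: the real pooling is internal to the web
    // stack, but the idea is the same: one Session (and its open Views) is used
    // by one thread at a time.
    class SessionPool
    {
        private readonly ConcurrentBag<Session> pool = new ConcurrentBag<Session>();

        public Session CheckOut()
        {
            Session session;
            if (pool.TryTake(out session))
                return session;

            // None free, so create a new one (Init/Open arguments are placeholders).
            session = new Session();
            session.Init("", "XY", "XY1000", "62A");
            session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);
            return session;
        }

        public void CheckIn(Session session)
        {
            pool.Add(session);   // back in the pool, ready for the next request
        }
    }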

If you’ve ever programmed our Business Logic Views, you will know they have had the idea of being multi-threaded built into them from day 1. For instance, all variables that need to be kept from call to call are stored associated with the view handle. There are no global variables in these Views. Further, since even single threaded programs open multiple copies of the Views and use them recursively, a lot of this support has been fully tested, since it’s required for these cases as well.

For version 5.6A we had to ensure that our API had a thread safe alternative for every API and that any API that wasn’t thread safe was deprecated. The sort of thing that causes threading problems is an API function that, say, just returns TRUE or FALSE on whether it succeeds, and then if you want to know the real reason you need to check a global variable for the last error return code. The regular C runtime has a number of functions of this nature, and we used to do this for our BCD processing. Alternatives to these functions were added that just return the error code. The reason the global variable is bad is that another thread could call one of these functions and reset this variable in between you getting the failed response and checking the variable.
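A small made-up illustration of the difference (these are not actual Sage 300 API functions): in the first style, two threads can race on the shared last-error variable; in the second, the error code comes back directly, so there is nothing shared to clobber.

    // Illustrative only; not actual Sage 300 API functions.
    static class LegacyStyleApi
    {
        // Problematic pattern: a success/failure flag plus a shared "last error".
        public static int LastError;      // another thread can overwrite this

        public static bool DoWork()
        {
            LastError = 1234;             // record why we failed
            return false;
        }
    }

    static class ThreadSafeStyleApi
    {
        // Thread safe alternative: return the error code directly so nothing
        // is shared between threads.
        public static int DoWork()
        {
            return 1234;                  // 0 = success, otherwise an error code
        }
    }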

State

If you’ve worked with our Views, you will know that they are quite stateful. We can operate statelessly for simple operations like basic CRUD operations on simple objects. However, for complicated data entry (like Order Entry or Invoice Entry) we do need to keep state while the user interacts with the document. This means we aren’t 100% stateless at this point, but our hope is that as we move forward we can reduce the amount of state we keep, or reduce the number of interactions that require keeping state.

Testing Challenges

Fortunately testing tools are getting better and better. We can test using the Visual Studio Load Tester as well as using JMeter. Using these tools we can uncover various resource leaks, memory problems and deadlocks which occur when multiple threads go wrong. Static code analysis tools and good old fashioned code reviews are very useful in this regard as well.

Summary

As we progress the technology behind Sage 300, we need to make sure it has the foundations to run as a modern web application and our multi-threading support is key to this endeavor.

 

Written by smist08

February 22, 2014 at 5:37 pm
