Posts Tagged ‘Sage’
Recently we were investigating why the Sage 300 ERP Financial Reporter dialogs would crash when launched from within Excel 2013. It turned out that they were running afoul of Windows' Data Execution Prevention (DEP). DEP is a security feature added to newer operating systems, basically to stop malware from finding a way to download code into a data area and then somehow causing it to execute, usually by overwriting the stack via a memory overrun bug.
OK, but Sage 300 ERP would certainly never try to do anything like that, so why would it crash with this sort of exception?
The Sage 300 ERP VB screens are built out of a number of ActiveX controls that provide data binding from Sage 300 Business Objects to the UI elements, so we don't have to write any code for most data fields; we just wire them up in the screen editor.
When we created these controls for version 5.0A, there were a number of ways of doing this, and the one we chose was Microsoft's Active Template Library (ATL), where you write the controls in C++ in an object-oriented manner. And it turns out that ATL puts code into the data segment and then executes it.
So why does ATL do this? The basic problem with object-oriented frameworks on Windows is that the core Windows kernel is not object oriented. Windows sends each notification to a window identified by its window handle. So how do you know which window object in your framework should get the notification message? Microsoft's MFC framework solved this problem by keeping a table mapping window handles to window objects; when each message comes in, it looks up which object it's for and then calls that object. This gave MFC a reputation for being slow, since there are a lot of such messages and MFC spends all its time looking up objects. On the good side, this is a quite safe and sure method of doing things and has never broken.

ATL decided to get tricky. For each window you can store a custom 32-bit value, so ATL made this a pointer to the object to call. Then when a message comes in, ATL builds, in data memory, an assembler jump instruction carrying this 32-bit address and passes control to it to call the object. Notice that this is done very quickly, with no table lookup. But it does mean building a bit of code in data memory and then executing it. Generally this is referred to as “thunking”.
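For the curious, here is a minimal sketch of the kind of thunk ATL 3.0 builds. This is my own illustrative reconstruction of the pattern for 32-bit x86, not code lifted from atlwin.h:

```cpp
#include <windows.h>

#pragma pack(push, 1)          // the struct must be byte-exact machine code
struct WndProcThunk
{
    DWORD m_mov;               // bytes for: mov dword ptr [esp+4], imm32
    DWORD m_this;              // the object pointer, patched in at runtime
    BYTE  m_jmp;               // byte for: jmp rel32
    DWORD m_relproc;           // displacement to the real static WndProc
};
#pragma pack(pop)

void InitThunk(WndProcThunk* t, void* pThis, void* staticWndProc)
{
    t->m_mov     = 0x042444C7;                // mov dword ptr [esp+4], imm32
    t->m_this    = (DWORD)(DWORD_PTR)pThis;   // overwrite the HWND argument
    t->m_jmp     = 0xE9;                      // jmp rel32
    t->m_relproc = (DWORD)((DWORD_PTR)staticWndProc - (DWORD_PTR)(t + 1));
    // The thunk itself is registered as the window procedure. When Windows
    // calls it, it swaps the object pointer in for the HWND and jumps to the
    // real handler -- no table lookup, but these bytes live in data memory.
}
```

With DEP turned on, executing those bytes in an ordinary data page raises exactly the exception described above.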
So basically ATL (and early versions of the .Net framework) execute the same design pattern utilized by modern viruses. This is a very clever and fast way to do things, but unfortunately it needed to be blocked.
Newer versions of ATL (version 8 and above) allocate a small block of memory from the operating system with the correct security attributes, so they can still do the same trick; but now the program has let Windows know that this is desired and correct behavior.
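Conceptually the newer ATL fix amounts to something like this sketch: request memory that is explicitly marked executable, then build the thunks there:

```cpp
#include <windows.h>

// Ask Windows for a block that may legitimately contain runnable code.
// PAGE_EXECUTE_READWRITE is what tells DEP this is desired behavior.
// (Newer ATL manages a pool of such blocks; one allocation shows the idea.)
void* AllocExecutableThunkBlock(size_t size)
{
    return VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                        PAGE_EXECUTE_READWRITE);
}
```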
Current versions of Sage 300 ERP have their controls compiled using ATL 3.0, which came with the Visual C++ 6.0 (VC98) compiler. The correct way to fix the problem is to compile with a later version of the compiler; we chose Visual Studio 2005 because most other things in our system are compiled with it, and it uses ATL 8.1, which works fine with DEP.
Sounds simple. But there are twenty or so controls in the system, and there are quite a few differences introduced with newer versions of the C/C++ compiler and with ATL. Generally moving to these newer versions is a good thing, but it introduced a few problems and we needed to ensure the system still worked correctly.
One good thing is that the newer C/C++ compiler has better warnings for detecting things like variables used before they are assigned, bad conversions and mismatched pointers. The compiler detected a few of these and they needed to be fixed. Generally this is a good thing since it makes the overall program more stable and reliable.
Another thing about the newer ATL is that it fixes a few bugs in the older version. For instance, the older ATL didn't set the background color of controls in all cases, so suddenly if a background color was set incorrectly it would show up, and a few UIs needed to be fixed to set background colors correctly. Generally these are good things, but they take a bit of work to get right. They also help with another project we have going to modernize the look of all our UIs.
Then we just have to make sure that our normally supported features, like translation to double-byte character languages, keyboard shortcuts and design-time dialogs, all still work as expected. This is a bit of a challenge with controls like the field edit control, which has a lot of modes of operation.
There is always a lot of debate when we change the build to use a new version of the compiler. Will older programs still work? Will customers with older hardware still work? Is it worth the work and risk in changing things rather than sticking with the trusted and true?
I take the view that we have to allocate time in each release to address technical debt. We need to upgrade various compilers, frameworks and bundled libraries; otherwise we start having problems with newer versions of Windows, with newer hardware and generally with operating in modern environments. I think we need to take advantage of the bug fixes, security fixes and performance fixes in the tools we are using.
Visual Studio 2012
Once we figured this out, we realized this explained why some ISVs were having trouble integrating with our system from Visual Studio 2012. DEP is now turned on by default for all new projects, which means you will GPF if you use any of our ActiveX visual controls. We then confirmed this was the problem. So when this fix is GA, it should also simplify integration work for our ISVs using modern tools. In the meantime you can set the /NXCOMPAT:NO linker option in your project to turn off DEP for your program. Obviously this isn't ideal, but it is a workaround.
Usually in Windows, DEP is only turned on for Windows system processes, but Windows can be configured to turn it on for all processes. Individual programs can also be configured to have DEP on or off when they are built, and how the program is built takes precedence over the Windows settings. This is why we ran into problems with Excel 2013, since it is compiled with DEP turned on. However, Office 2013 is also a development platform, so turning on DEP for Office means anything integrated into Office has to be DEP compliant as well. This eliminates using anything built with older versions of ATL or the .Net framework.
When Will This Be Fixed?
We have fixed this for our upcoming Sage 300 ERP 2014 release (which will be released in 2013). We are currently testing as part of that project, and once we are confident we've fixed any remaining minor glitches, we'll bundle these updated controls together as a hotfix for Sage 300 ERP 2012.
Finding and solving the problem with our Financial Reporter and Excel 2013 was a bit of a relief, since it also explained a number of other problems that had been hanging around unsolved. It's good to figure out where something has gone wrong and to fix it. It's also good to know why some developers were having trouble integrating with Sage 300 ERP from VS2012.
In investigating some performance problems reported on systems running Sage 300 ERP, the trail led to Windows Bit-Rot. Generally Bit-Rot refers to the gradual degradation of a system over time. Windows has a very bad reputation for Bit-Rot, but what is it? And what can we do about it? Some people go so far as to reformat their hard disk and re-install the operating system every year as a rather severe answer to Bit-Rot.
Windows Bit-Rot is the tendency for a Windows system to get slower and slower over time: slower to boot, longer to log in, and longer to start programs, along with other symptoms like excessive and continuous hard disk activity when nothing is running.
This blog posting is going to look at a few things that I’ve run into as well as some other background from around the web.
I needed to investigate why printing Crystal reports was quite slow on some systems. This involved software we have written as well as a lot of software from third parties. On my laptop, Crystal would print quite slowly the first time and then quickly on subsequent runs. My computer is used for development and is full of development tools, so the things I found here might be more relevant to me than to real customers. So how to see what is going on? A really useful program for this is Process Monitor (procmon) from Microsoft (from their SysInternals acquisition). This program will show you every access of the registry, the file system and the network. You can filter the display; in particular you can filter to monitor only a single program to see what it's doing.
ProcMon yielded some very interesting results.
My first surprise was to see that every entry in HKEY_CLASSES_ROOT was read. On my computer, which has had many pieces of software installed, including several versions of Visual Studio, several versions of Crystal Reports and several versions of Sage 300 ERP, the number of classes registered here was huge. OK, but did it take much time? Well, the first time something that does this runs, it seems to take several seconds; after that it's fast, probably because the registry ends up cached in memory. Several .Net programs I tried appear to do this. I'm not sure why; perhaps .Net just wants to know all the classes in the system.
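To get a feel for the scale involved, here is a small sketch of my own (not taken from any of the programs being profiled) that walks HKEY_CLASSES_ROOT the same way those startup scans appear to:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    DWORD index = 0, count = 0;
    wchar_t name[256];

    for (;;)
    {
        DWORD size = 256;   // reset each time; RegEnumKeyEx updates it
        if (RegEnumKeyExW(HKEY_CLASSES_ROOT, index++, name, &size,
                          NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;
        ++count;            // each iteration is a registry read procmon logs
    }
    printf("HKEY_CLASSES_ROOT has %lu top-level keys\n", count);
    return 0;
}
```

On a machine with years of accumulated installs this count is huge, which is why the first scan takes seconds before the registry ends up cached.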
But this does mean that as your system gets older and you install more and more programs (after all why bother un-installing when you have a multi-terabyte hard drive?), starting these programs will get slightly slower and slower. So to me this counts as Bit-Rot.
So what can we do about this? Un-installing unused programs should help, especially if they register a lot of COM classes; Visual Studio is the big one on my system, followed by Crystal and Sage 300. This helps a bit. But there are still a lot of classes there.
Generally I think uninstall programs leave a lot of bits and pieces in the registry. So what to do? Fortunately this is a good stomping ground for utility programs. Microsoft used to have RegClean.exe; Microsoft discontinued support for this program, but you can still find it around the web. A newer and better utility is CCleaner from Piriform, whose free version fortunately includes a registry cleaner. I ran RegClean.exe first, which helped a bit, then ran CCleaner and it found quite a bit more to clean up.
Of course there is danger in cleaning your registry, so it's a use-at-your-own-risk type of thing (backing up the registry first is a good bet).
At the end of the day, all this reduced the first-time startup time of a number of programs by about 10 seconds.
My second surprise was the number of calls to check Windows Group Policy settings. Group Policy is a rather ad-hoc mechanism added to Windows to allow administrators to control networked computers on their domain. Each group policy is stored in a registry key, and when Windows goes to do an operation controlled by group policy, it reads that registry key to see what it should do. I was surprised at the amount of registry activity that goes into reading and checking group policy settings. Besides annoying users by restricting what they can do on their computers, group policy appears to impose a generally high overhead of excessive registry reading on almost every aspect of Windows operation. There is nothing you can do about this, but it appears that as Windows goes from version to version, more and more gets added and the overhead gets higher and higher.
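The pattern procmon shows looks roughly like this sketch; the key and value below are a real example of an Explorer policy setting, but the check itself is just illustrative:

```cpp
#include <windows.h>

// Before many shell operations, Windows opens a Policies key and reads a
// value to see whether an administrator has restricted the action.
bool IsControlPanelRestricted()
{
    HKEY hKey;
    DWORD value = 0, size = sizeof(value);

    if (RegOpenKeyExW(HKEY_CURRENT_USER,
            L"Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer",
            0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return false;                        // no policy key: not restricted

    RegQueryValueExW(hKey, L"NoControlPanel", NULL, NULL,
                     (LPBYTE)&value, &size);
    RegCloseKey(hKey);
    return value != 0;
}
```

Multiply this pattern by the hundreds of policy-controlled operations in a session and the registry overhead procmon records adds up.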
You may not think you install that many programs on your computer, so you shouldn't have these sorts of problems; but remember that many programs, including Windows/Microsoft Update, Adobe Updater and such, are regularly installing new programs on your computer. Chances are these programs are leaving behind unused bits of older versions that clutter up your file system and your registry.
Related to auto-updates, it seems that many programs now run as icons in the task bar, install Windows services, or set programs to run when you log in. All of these slow down the time it takes to boot Windows and to sign in. Further, many of these programs, Dropbox for example, keep polling their server to see if there are any updates. Microsoft has a good tool, Autoruns for Windows, which helps you see all the things that are automatically run and remove them. Again this can be a bit dangerous, as some of them are necessary (perhaps a trackpad utility, say).
Similarly it seems that everyone and their mother wants to install browser toolbars. Each one of these will slow down the startup of your browser and use up memory and possibly keep polling a server. Removing/disabling these isn’t hard, but it is a nuisance to have to keep doing this.
Hard Disk Fragmentation
Another common problem is hard drive fragmentation. As your system operates, the hard disk becomes more and more fragmented. Windows has a defrag program, but it's often scheduled to run at a time when your computer is turned off, and you never bother to run it by hand. It is worth defragging your hard drive from time to time to speed up access. There are third-party defrag programs, but generally I just use the one built into Windows.
Related to the above problems, un-installation programs often leave odds and ends behind in the file system, and sometimes it's worth going into Explorer (or a cmd prompt) and deleting the folders for un-installed programs. Generally it reduces clutter and speeds up operations like reading all the folders under Program Files.
Dying Hard Drives
Another common cause of slowness is that as hard drives age, rather than outright failing, they often start having to retry reading sectors more. Windows can mark sectors bad and move things around, and hard drives seem to be able to limp along this way for a while before completely failing. I tend to think that if you hear your hard drive resetting itself fairly often, you should replace it. Or if, when you defrag, you see the number of bad sectors growing, replace it.
After going through this, I wonder if the people that just reformat their hard drive each year have the right idea? Does the time spent un-installing, registry cleaning, de-fragging just add up to too much? Are you better off just starting clean each year and not worrying about all these maintenance tasks? Especially now that it seems like we replace our computers far less frequently, is Bit-Rot becoming a much worse problem?
Sage 300 ERP has a fairly flexible mechanism for setting up your General Ledger Chart of Accounts. This is a fairly important activity since it controls how you will be able to run financial reports and slice and dice your financial information. I'm not an Accountant, so I might miss some of the finer points of accounting, and it's always important to follow generally accepted accounting principles as much as possible. In some industries and in some countries your chart of accounts is specified for you; for instance, China specifies the chart of accounts that companies must use.
So the actual chart of accounts can have some fairly hard constraints on how it’s set up. Fortunately there are some other mechanisms like account groups, account rollup, optional fields and transactional optional fields that can be used to enhance reporting capabilities.
I blogged on the general structure of G/L here. This blog posting is going to look a bit more in depth into the structure of G/L Accounts and how to generate some fairly flexible reports.
Each G/L Account can be up to 45 characters in length (formatted). It can consist of up to ten segments, which together with their separators must fit within that formatted length. One segment must be designated as the Account segment, and each combination of segments used is called an Account Structure. For example, an account like 6000-200-10 might consist of an Account segment (6000), a Division segment (200) and a Region segment (10).
The account segments are defined in G/L Options on the Segments tab and are stored in GLABK (GL0022). These are the building blocks for the G/L Accounts and must be defined first. In the Options UI you also specify the segment separator character and which segment is the Account segment. Next you define the Account Structures in the G/L Account Structures setup UI, stored in GLABRX (GL0023); these specify the various combinations of account segments that you will be using, and you also specify which is the default structure code. The Account segment isn't validated and can be any value; the other segments can only have specific values, which you specify in the G/L Segment Codes setup UI, stored in GLASV (GL0021). Then you define the Account Groups you want in the G/L Account Groups setup UI.
With these values set up, you can now enter your Chart of Accounts using the Accounts UI in the G/L Accounts folder. The Accounts are stored in GLAMF (GL0001).
If you are following generally accepted accounting principles you should have an idea of how you want your G/L Accounts structured and you should have an idea of how you want your Financial Reports to be structured which then dictates your Account Groups. The Account Groups specify the normal F/R reporting categories like “Cash and Cash Equivalents”, “Accumulated Depreciation” or “Provision for Income Taxes”.
Now that you have a structure, creating all these accounts one by one sounds rather tedious. Fortunately there is the G/L Create Accounts function that will create your Accounts en masse. Our Chart of Accounts isn't a sparse system, meaning you do need to create an Account before you use it, so this is a very useful tool.
There are two types of optional fields associated with G/L accounts. The first are optional fields associated with the actual Account. These are typically used in reports where you are selecting various G/L Accounts to report on, and you can then use these optional fields to control which Accounts are included. For instance, the Chart of Accounts report or the Trial Balance report can print Accounts based on the values of these optional fields.
The other sort of optional fields associated with G/L Accounts are transaction detail optional fields. These carry information flowing in from the other ledgers, like A/R Invoices, and store the values for these fields with the transaction details. You can include them in reports like the Transaction Listing or Batch Listing reports.
Ultimately, the final output of your ERP system is the Financial Reports. Generally CFOs want to look at their Financial Reports from all sorts of angles and with all sorts of categorizations. Sage 300 ERP is quite efficient at Financial Reporting since, when it posts batches, it stores special Financial Set records that keep a lot of data pre-calculated and easy to access. These are all stored in the GLAFS table (GL0003).
The main Financial Report UI lets you choose all sorts of range criteria and sort orders. These are very useful for getting out your financial statements, but within the Financial Statement specifications there are very powerful data inquiry functions and you can use the full power of Excel to manipulate the data, create charts, create pivot tables, show geographic distributions, etc.
The big drivers of Financial Reports are the Accounts, the Account Groups and the Account segments. However all the functions that you place in your financial reports like FRACCT, FRAMT, FRPOST take filters that can include all sorts of criteria on the fields in the G/L Account record along with optional fields that are specified like A.ACCTCLASS = "Sales" for Account optional fields and T.QUANTITY <= 0 for transactional optional fields. Keep in mind that account optional fields would be used for filtering accounts and transactional optional fields for filtering transactional amounts (like in FRPOST).
Usually you use the account groups to generate the main F/R statements by the usual accounting categories and then you restrict the account segments to get say departmental or geographic reports. Then you use optional fields to get more esoteric views of your financial data.
To some degree the power of the Financial Reporter depends on having set up your Chart of Accounts properly in the first place.
But suppose you've been running for a while and realize you didn't set things up ideally for your financial reporting needs? You've now got lots of transactions posted to G/L accounts that you feel are in the wrong structure. Now what do you do? The answer is the G/L Account Code Changer module that is included with G/L (you need to activate it separately from G/L). This module will do a search and replace on the database and change all instances of a G/L account from one value to another. This way all your committed transactions and financial data are moved to the new account, and your financial reports can be printed off the more ideal new structure. The Account Code Changer can also change the account segment separator and the segments.
Sage 300 ERP has very rich and flexible features allowing companies to create Charts of Accounts that lead to very powerful reporting capabilities. A good knowledge of the whole process is important to design that perfect Chart of Accounts, but if you make a mistake you can always use the Account Code Changer to fix it.
Modern ERP systems maintain a company's full financial history for many years, and people want to be confident that all that data is correct and makes sense. So how can you be confident that your database has full referential integrity, especially after years and years of operation? The Sage 300 ERP Data Integrity function is a way to validate the integrity of a database. Modern computers are much more reliable than when our Data Integrity function was originally written, but it still serves a good purpose. In this article we will explore some of the protections for data integrity in Sage 300, along with some of the possible causes of corruption.
The number one protection of data integrity in Sage 300 is database transactioning. Data is always written to the database in a database transaction, and database transactions always take the database from one state with full data integrity to the next. A database transaction is guaranteed by the database server to be either entirely written to the physical database or not written at all; you will never see part of a transaction.
For instance, as we post a G/L batch, we post each entry as a database transaction; since each entry in a G/L batch must be balanced, database transactioning guarantees that the G/L is always in balance and hence data integrity is maintained.
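As a sketch of what this means in practice (Database, GLEntry and GLDetail here are hypothetical stand-ins, not the actual Sage 300 business logic):

```cpp
#include <stdexcept>

// Post one balanced G/L entry as a single all-or-nothing transaction.
// (All type and method names below are illustrative only.)
void PostEntry(Database& db, const GLEntry& entry)
{
    // A G/L entry must balance before we even attempt to post it.
    if (entry.TotalDebits() != entry.TotalCredits())
        throw std::runtime_error("unbalanced entry");

    db.BeginTransaction();
    try
    {
        for (const GLDetail& d : entry.Details())
            db.WriteDetailAndFiscalSets(d);   // every row of the entry
        db.Commit();    // the server publishes the whole entry at once
    }
    catch (...)
    {
        db.Rollback();  // on any failure nothing is written,
        throw;          // so the ledger can never be half-posted
    }
}
```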
Where Do Integrity Errors Come From?
Database transactioning sounds great, and in fact with it we see very few database problems in Sage 300 ERP. But when we do get integrity problems, where do they come from?
Below is a list of some of the main causes of data integrity problems. I’m sure there are more. I’m not looking to blame anyone (including myself), just to point out the main causes I’ve seen:
- Bugs in the program. If Sage 300 asks SQL Server to store incorrect data, it will do so in a completely reliable transactional manner. Hopefully our QA processes catch most of these and this doesn't happen often; but Sage 300 is a large, complicated program and mistakes happen.
- People editing database tables directly in SQL Server Enterprise Manager. For various reasons people might try to put something in the database that the program doesn’t allow, and often this leads to database corruption.
- Third party programs that write to the Sage 300 database directly. We do a lot of data validation checking in our business logic before allowing data to be written to the database, but if this is bypassed then corruption occurs. A common one in this case is not handling currency decimal places correctly.
- Data validation needs to be tightened. Now and again, someone has written data that we accepted as valid that wasn’t. Then we had to tighten our data validation routines. The good news here is that we’ve been doing this for a long time now.
- Bugs in the database server. We've seen database indexes get corrupted, which can lead to further problems even after the indexes are fixed (because of other data written while they were bad).
- Partial backups or restores. We've seen people back up the tables for each application independently and then restore them, perhaps to try to put A/R back to yesterday. But this corrupts the database, since there is often matching data that needs to stay in sync in Bank, Taxes or perhaps Order Entry. Make sure you always back up and restore the database as a whole.
- Hardware glitches. Even with CRC checking and such, strange errors can start to appear from hard disk or memory hardware failures in computers.
The Data Integrity Checker
To find these sorts of problems, Sage 300 ERP has a data integrity checker in its Administrative Services. In the main screen you select the applications you want to check and whether you want to fix any minor errors. Since this can be a long process, for several applications you can also configure which parts within the application to check, by selecting the application and choosing the application options.
The end result is a report that lists all the errors found.
What Does the Integrity Checker Look For?
So what does the Integrity Checker do? Below is a list of some of the checks that are typically made:
- Check the integrity of each file by reading each record and calling the View Verify API, which calls the business logic to validate the record. This includes things like checking that the decimals of a money amount are correct, that the data is valid for its data type and that foreign keys are valid.
- For Header/Detail type relationships there are often total or summary fields in the header, like the total amount of an order or the number of detail lines. The integrity checker will read through the details and add up these numbers to ensure they match the header (see the sketch after this list).
- Check the database for any detail records that don’t have a matching header record (orphans).
- Each application then knows about all sorts of cross file relationships that must be maintained and the Integrity Checker for that application will validate all of these relationships.
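As an illustration of the header/detail check (again with hypothetical types rather than the real business logic):

```cpp
#include <cmath>
#include <vector>

struct Detail { double amount; };                    // hypothetical types
struct Header { double total; int detailCount; };

// Re-add the detail amounts and counts and compare them with the values
// cached on the header; a mismatch is exactly what "fix minor errors"
// repairs by rewriting the header fields.
bool HeaderMatchesDetails(const Header& h, const std::vector<Detail>& details)
{
    double sum = 0.0;
    for (const Detail& d : details)
        sum += d.amount;

    return std::abs(sum - h.total) < 0.005 &&        // tolerate fp noise
           (int)details.size() == h.detailCount;
}
```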
What Does Fix Minor Errors Do?
There is a check box to fix minor errors, but what does it do? Mostly it fixes up header/detail relationships by correcting any total or summary fields in header records. It can also delete orphaned detail records. But generally it doesn't attempt much, because we don't want to risk making things worse.
But it’s Slow
The big complaint about the Data Integrity checker is that it's slow. This is because it goes through every record in the database as well as checking all the cross dependencies, and these days we see company databases that are hundreds of gigabytes in size. Generally the complaint is that you can't just run it as routine maintenance overnight; you tend to have to configure what you want to check and run it selectively. It's also best to run it when people aren't in the system, since it does put a fair bit of load on the system.
Even with super reliable modern databases and hardware, data integrity errors can still creep in and need to be dealt with. Just being aware they exist is half the battle. Also remember that it is extremely important to have regular full backups of your data in case of a really catastrophic failure.
As we continue to move Sage 300 ERP to the Azure Cloud, one question that gets asked is whether someone just running G/L, A/P and A/R (the “GLAPAR”, which rhymes with clapper) is going to be negatively affected by the presence of, say, I/C, O/E and P/O. Fortunately, Sage 300 ERP activates each module independently, and unless an accounting module is activated in the database you don't see it at all; it's just as if you hadn't installed it.
With per-user pricing we've tended to bundle quite a large number of modules under our various pricing schemes. However, if you get such a bundle and then activate everything you have, you could enable quite a few fields and icons that clutter things up, which is a nuisance if you never use them. Generally business flows better if you only see icons and fields that you actually use. Why keep seeing currency rate fields when you never select a different currency? Why see selections for things like lots and serial numbers when you don't use them? Why see Project and Job Costing icons when you don't use that module?
Security and the built-in form customization abilities can also be used to hide complexity. However, if a feature is enabled, it usually implies that someone in your organization is going to have to deal with it. So consider these choices in addition to setting up security and customizations for your users.
In this article, I’m going to go through the process of activating applications and provide some behind the scenes info on the various processes around these issues. A slightly related article is my posting on Sage 300’s multi-version support.
To access a module, it first has to be installed. Generally from the installation DVD image you can select (or de-select) most modules. There are some dependencies, so installing Purchase Orders implies that a number of other accounting modules need to be installed as well. Each accounting module gets its own folder under the Sage 300 installation folders. These folder names are formed from a two-character prefix like GL or PO followed by a three-character version like 61A (not the year-based display version). Generally all accounting applications are created equal, and the Sage 300 System Manager becomes aware of them by the presence of these folders, gathering information on each application by looking for standard files stored there (like the roto and group files).
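The discovery pattern is roughly like the following sketch (my own illustration, not System Manager's actual code):

```cpp
#include <windows.h>
#include <wchar.h>
#include <wctype.h>
#include <stdio.h>

// Scan the install directory for folders shaped like a two-character
// application prefix plus a three-character version, e.g. GL61A or PO61A,
// and treat each as a candidate installed module.
void ListInstalledModules(const wchar_t* installDir)
{
    wchar_t pattern[MAX_PATH];
    swprintf(pattern, MAX_PATH, L"%s\\*", installDir);

    WIN32_FIND_DATAW fd;
    HANDLE h = FindFirstFileW(pattern, &fd);
    if (h == INVALID_HANDLE_VALUE)
        return;

    do
    {
        if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) &&
            wcslen(fd.cFileName) == 5 &&
            iswalpha(fd.cFileName[0]) && iswalpha(fd.cFileName[1]))
        {
            wprintf(L"candidate module folder: %s\n", fd.cFileName);
            // ...then confirm by looking for the standard roto/group files...
        }
    } while (FindNextFileW(h, &fd));

    FindClose(h);
}
```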
When you create a new company in Sage 300, the only applications present by default are Administrative Services and Common Services; everything else is added via the Data Activation screen.
This program lets you choose which applications to activate into the database from the list of all installed accounting modules. When you select a given module, you may need to specify a few extra parameters in the next screen. The program will also tell you about any dependencies that are required and select them for you. Then, when it goes to activate the programs, it calls the activation object in each selected application to let the application do whatever is required to add itself to the system. This usually involves creating all the database tables for the application along with seeding any data that is set up automatically (like, perhaps, a default options record). If you are upgrading to a new version, it will do whatever is required to convert the data from the old version to the new.
You can run this program as many times as you like, so if you don’t activate something, you can always come back later and activate it then. Just keep in mind that after you activate something, you can’t de-activate it. We do put up a fairly strong message to ensure you back up your database before running data activation. Not all database conversions can be transactioned, so if data activation does fail, you may need to start over from a backup, though often you can fix the problem and run activation again to finish.
For our hosted versions, you don't need to install anything and you don't actually see the data activation screen; you select what you want from a web site and the database is provisioned for you. For the on-premise version, installation and activation are usually performed by the business partner.
If you just activate General Ledger, then you will only see General Ledger on the desktop and won’t see icons from anything else that is installed.
Also notice that “Create Revaluation Batch” isn’t shown because I haven’t enabled multi-currency for this database.
Other Separate Features
Some modules like multi-currency, serialized inventory, lot tracking and optional fields aren’t installed via data activation. The database support for these modules is always present. To be able to use these you need to install the license for the module and then you can enable the functionality within the other applications. For instance to turn on multicurrency you need to enable this in the Company Profile screen in Common Services.
Until you do this, all the fields, functions and icons for these features are hidden and won't clutter up your desktop or entry forms. So if you don't really need them, don't turn them on. Also keep in mind that once you enable these features, you can't turn them off again; they are on permanently.
In one regard Sample Data is a bad example, since it has everything possible activated and enabled. Since it comes this way, applications will be activated even if they aren’t installed. This sometimes causes funny problems because some functions that communicate between modules won’t work in this case.
Sample data is a great way to show any feature in Sage 300 ERP, but in one regard it's rather misleading. It tends to always be run as the ADMIN user and hence always shows all possible icons, fields and functions. This tends to make the product look much more complicated than it really is in real-world usage. In the real world, you wouldn't activate things you don't need, and users wouldn't have security access to everything, so again many things would be hidden and their workspace simplified.
We don't normally allow deactivating an application or turning off a feature like multi-currency. The reason is that data integrity problems could occur if you did: for instance, if you have processed payroll checks, Bank needs Payroll present to reconcile those checks, and if you deactivate Payroll while there are un-cleared checks in Bank, you will never be able to reconcile them. There are many cases like this, so as a general good-practice protection we prevent de-activation.
But the developers in the audience will know there is a back door. In the Sage 300 ERP SDK there is a “Deactivate” program that will deactivate an activated application. It does this by dropping all the tables for the application from the database and removing its entry from CSAPP. It does not do any cleanup of data that might be in any other accounting application's tables. This is a great tool while developing a vertical accounting module for Sage 300, but if you use it on a production system, be really confident that the offending application hasn't been used and that you aren't going to leave corrupted data in all the other modules by removing this one. Again, back up before proceeding. Similarly, turning off things like multi-currency by editing the CSCOM table in SQL Enterprise Manager has the same caveats.
Generally you want to keep your accounting system as simple as possible. Modular ERP systems like Sage 300 ERP have a great breadth of functionality, but most companies only need a subset of that, which is relevant for their industry. So be careful to only select what you need and keep your system a little simpler and easier to use.
In last week's blog post, one of the topics covered was an exercise in predicting what things will be like in ten years. We didn't discuss any negative impacts of technology, like environmental collapse due to gross consumerism. The other thing that wasn't discussed was the prospect of the so-called technological singularity occurring in the next ten years. The singularity is defined as the point at which computers (or networks of computers) become self-aware and exceed human intelligence.
This has been a popular topic in Science Fiction for some time. Interestingly the term is often attributed to John von Neumann who spoke of “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
We’ve all felt how change has been accelerating. As change gets faster and faster, predicting the future becomes harder and harder. The idea behind the singularity is that you cannot predict what will happen on the other side of it. Basically as computers/networks become self-aware and more intelligent than us, then things will start to change so quickly that all our predictions will be out the door.
I think this could happen in the next ten years; there are many projections, like the well-known charts of computing power over time, that give good evidence we should reach the prerequisite level of complexity between 2020 and 2040.
Robert J. Sawyer, the popular Canadian Science Fiction writer, has an excellent trilogy of books, his WWW series consisting of Wake, Watch and Wonder, which follows a scenario where the Internet becomes alive. This series is certainly a very positive view of this happening and I highly recommend reading it (disclaimer: I haven't read the third one yet).
Mathematician and Science Fiction writer Vernor Vinge wrote a very influential essay on the singularity here. A lot of ideas from the essay are woven into his Science Fiction novels like “A Fire Upon the Deep” or “Rainbows End”. I greatly enjoy Vinge's novels and highly recommend them.
In fact companies like Google are actively working to make the singularity happen. Both Google founders Larry Page and Sergey Brin are driving projects within Google to achieve self-awareness and intelligence in the Google data centers. In fact both put in a lot of personal money to found the Singularity University.
You have to think that the company bringing self-driving cars to market, with personal concierge software like Google Now and with its giant data centers and huge resources, is well positioned to bring the Singularity to life (or have they already done it?).
Of course there are many Science Fiction works which portray a very negative vision of this happening: the emergence of Skynet in the Terminator series, the enslavement of people as power generators in the Matrix series, as well as HAL in 2001. Generally these set up quite good action movies, but I'm not really sure the types of wars envisioned here are too likely. I tend to think that most negative outcomes for the future would be caused by our own doing, whether war or environmental collapse.
Accuracy of Predictions
Predicting the future has always been very inaccurate. We always predict things will happen much faster than they do; putting years in novels like 1984 or 2001 quickly shows how slowly things can develop. Interestingly, back in the 60s for the original Star Trek, people thought we would have warp drive in a few years, while a talking computer that knows everything was considered quite implausible. Interesting how things do change.
I find news shows that make New Year's predictions and look at the accuracy of last year's predictions quite entertaining; usually all the predictions from last year are wrong. Similarly, if you study statistics and the accuracy of predicting trends by projecting graphs and such, you see that the mathematical inaccuracy grows extremely fast. So the charts of computer power mentioned above look quite compelling, but believing the projections they make is strictly an act of faith and intuition with no mathematical backing.
Is it Possible?
There is a lot of controversy about whether true human-type self-aware intelligence is possible with just a Turing machine type computer, and a lot of suspicion that some other secret sauce is required. Roger Penrose believes that our neurons aren't just like computer logic gates, but that there are quantum effects going on that are necessary to go beyond a Turing machine.
I studied the transitions from stable simple systems to complex chaotic systems as part of my Master’s Degree. As dynamic systems make the transitions from stable simple predictable systems to chaotic systems, they don’t necessarily become completely random. It’s very common to get new stable emergent states that were completely unpredictable from the initial analysis.
I believe that self-aware intelligence is possible with just a Turing machine: as our computing power and networks get more and more powerful and complex, Chaos Theory will start to apply, and intelligence is in fact some sort of strange attractor that will eventually emerge.
Just as we get amazing graphic images of fractals from iterating very simple equations, we get amazing, unpredictable but stable complexity emerging. To me this will be the foundation for intelligence.
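Two textbook examples of simple rules producing that kind of emergent structure (standard chaos-theory fare, nothing Sage-specific) are the logistic map and the Mandelbrot iteration:

x(n+1) = r · x(n) · (1 − x(n))   (the logistic map)
z(n+1) = z(n)² + c               (the Mandelbrot iteration)

As r grows, the logistic map passes from a stable fixed point through period doubling into chaos, with stable periodic windows embedded inside the chaotic region; and iterating z² + c over the complex plane produces the infinitely detailed Mandelbrot set.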
Making predictions is fun, but usually not accurate. I find it fascinating to think about how intelligence might emerge on the Internet. And it's not just being left to emerge or evolve on its own; in fact there are some very rich and powerful people putting quite large amounts of resources into making it happen.
I do think that once this happens (if it happens), it will be a singularity, and we will have no idea how things progress past that point.
This past week I had the privilege of attending a Sage Leadership conference that was put on for about 40 of the key Sage North American R&D Leaders. It was held over two days at the Newport Beach Hyatt Hotel. Newport Beach is a beautiful spot with Balboa Island and Back Bay in easy walking distance along with a number of good restaurants. The intent of the conference was to give people a chance to get away from the daily grind of problem solving and routine management to really concentrate on leadership. This is very important at Sage right now as the company is going through a large number of changes to adapt to the fast changing technology/societal landscape that we are now living in.
We had an artist drawing visually what we were doing, so in this blog posting I’ve added a few of her drawings on the relevant topics. They are really quite good and much better than getting an e-mail of PowerPoint presentations.
The conference got off to a rocky start when the group was asked to stand if they could recite the Sage Vision statement, and only a couple of people on the executive committee stood. This then led into a discussion about the Sage brand and the Sage Vision.
Just to be clear, the Sage brand isn’t just the Sage logo and the Sage Vision isn’t just some feel good marketing text that we put under the logo on our brochures. These aren’t about marketing at all, they are about defining the company that we want to become. The Sage Vision statement is:
To be recognized as the most valuable supporter of small and medium sized companies by creating greater freedom for them to succeed.
We then spent time breaking apart and analyzing this statement and then ensuring that what we are working on today aligns with this vision. Some of the key parts of this statement are that we will be recognized, that we do provide value in everything we do, it defines our market segment and defines our goal. We want to give our customers freedom from dealing with accounting matters so that they can concentrate on their real business whatever that may be.
After fully drinking the vision Kool-Aid, we went on to discuss leadership. A lot of this revolved around being a confident leader: our ability to inspire our co-workers and to get all the cats moving in the same direction.
We discussed leadership attributes that we at Sage do well, but more importantly we spent more time discussing the leadership attributes that we are lacking and how to develop these.
Rather than doing a Clint Eastwood and having the customer represented by an empty chair, we actually invited a couple of customers to kick off the second day. We started with a question and answer session to learn about their businesses, to learn about the problems that they are having, about what is working well. Not just for their ERP system but for their whole business in all its aspects.
We were asked to take notes, and when the Q&A was over, the second part was to have our own Shark Tank show. Each table became a team (about five people each) and had 45 minutes to come up with a product idea to pitch to the sharks, who in this case were our two visiting customers. They then judged the ideas and awarded a bottle of Monopoly money to the team that they wanted to invest in.
This exercise was a lot of fun and was a good exercise of the creative juices. The winning ideas are then going to be fed into our innovation process to see if other customers also think they are good ideas.
It was interesting to watch, since this was entirely developers, that they fell into the same traps we usually blame Product Management for: namely, answering “yes it can” to every question and, under pricing pressure, lowering the price until it's a free service.
A primary goal of the conference was to foster more innovation in everything we do. One fun exercise was to have all the tables go off into their own groups and put together a play or skit on a day in the life of someone using technology ten years in the future. I blogged on my vision of ERP in 2020 a couple of years ago here; certainly my vision of ten years out was way more conservative than anything envisioned here. Center stage went to voice interaction and general direct input into the brain. In a way, the skits projected where technologies like Siri and Google Now, along with Google Glass, will be in ten years.
The key theme is that no one will be keying in ERP transactions anymore. You will just do business by chatting and gesturing, sign contracts by shaking hands and all the debits and credits will happen magically (via technology) in the background.
The conference was a great deal of fun and highly successful. It was good to finally meet in person a number of people I'd only dealt with via e-mail, as well as a number of people I didn't know at all. It was good to ensure we are all aligned and working to the same vision, and that we are all innovating together toward a common goal of really providing that freedom for our customers to succeed. But more importantly, there are a number of things for me to start doing immediately on returning to the office.
Recently I've been talking to many people about various techniques to develop portable mobile applications. In the good old days of the 90s, with the Wintel monopoly, you could usually just develop for Windows and reach 99% of the market. The main challenge was just adapting to new versions of Windows, where you would get things like UAC thrown at you.
Now suddenly we are developing for various Windows devices, various Apple devices and various Android/Linux devices. Plus we have some other contenders like Blackberry clamoring for our attention. The market is now highly fragmented and all of these have considerable market share.
I develop business applications and the functionality I’m most interested in has to do with ERP and CRM workflows. This means I’m not writing games, although it would be fun to produce a game like “Angry Accountants” or “ERPville”.
I know I’ve blogged about mobile development a few times like here and here; but my thinking on this keeps changing and I’m still not happy with the whole situation. There are many mobile frameworks and I’m only touching on a couple of representative ones here. I’ve got to think there will be a better solution, but until then I feel like ranting.
There is an appeal to going native. The native development environments are really excellent. I’ve been playing with Apple’s XCode development tools for OS/X and iOS development and they are really amazing. They’ve progressed a lot since I last saw them over 20 years ago when I worked for a company that did NeXTStep development for the NeXT cube. Similarly Visual Studio 2012 for Windows 8 development is really quite good and so are all the Android tools.
If I only needed to develop for one of these, I would be happy with any one of them. But keeping several in my brain at once really hurts.
You get the best results for the given platform with any one of these, but you don’t really get anything reusable except the basic design. All the platforms use a different object oriented extension of C (namely Objective C, Java and C#). All the platforms have different operating system functions and different separations between what you do in the application versus have as a service.
One surprising thing I found from talking to people was the idea of writing as much as you can in C. All the main platforms use extensions of C and all support compiling and running C code. This reminds me of the old days where you tried to write a portable application for Mac, Windows and Linux by isolating the operating-system-dependent parts and then writing as much code as possible in good old portable C. Funny how what was old can be new again. But then, if it was a good idea back then, why wouldn't it be a good idea now?
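A minimal sketch of that pattern, with illustrative names of my own (nothing here is from a shipping product): the business logic lives in portable C/C++ and only ever talks to a small interface that each native shell implements.

```cpp
#include <string>

// Implemented once per platform: by the Objective C shell on iOS, the
// Java/JNI shell on Android, the C# shell on Windows.
struct PlatformServices
{
    virtual void ShowMessage(const std::string& text) = 0;
    virtual std::string DataDirectory() = 0;
    virtual ~PlatformServices() {}
};

// The shared ERP/CRM workflow code is identical on every platform because
// it only touches the interface, never an OS API directly.
void SubmitOrder(PlatformServices& os, double amount)
{
    if (amount <= 0.0)
    {
        os.ShowMessage("Order amount must be positive.");
        return;
    }
    // ...portable validation, pricing and posting logic goes here...
    os.ShowMessage("Order submitted.");
}
```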
The other problem with Adobe is that they are the leading vendor in producing software with giant security flaws. This means they are more likely to be blocked or dropped from platforms. It is also a big risk for app development since your app could be tarred by Adobe’s problems.
Xamarin takes the Mono project and ports it to mobile devices like iOS and Android. The goal is that you can develop a C# Windows application that will also run on iOS and Android. We tried Mono as a way to move some .Net projects to Linux, but ran into too many problems and had to give up. As a result Mono has left a bad taste in my mouth, so I'm inclined to avoid this. I also wonder how heavy the result will be, putting the .Net runtime on top of the native iOS or Android operating systems. Is this just going to have too many layers, ending up too fat and bloated?
If they can pull it off with high quality and compatibility there is potential here, but I suspect, like Air, you will just get a big non-standard mess.
Unfortunately all the vendors have a vested interest in their app stores (like iTunes). Vendors like Apple, Google and Microsoft make 30% off all software sold through their stores; they make nothing on people running web applications from browsers. As a consequence quite a few native platform capabilities are deliberately held back from the web. Then they market hard that for the best experience you must use a native app from their store, or you are getting a second-rate experience. Strangely, the reverse is often the case: the app is just providing a subset of some web site, and you lose abilities like being able to zoom.
In the current market/environment it’s very hard to compete against native apps with web apps which is really too bad. I think at some point the app store monopoly will fall apart, but that is today’s reality.
The main risks with PhoneGap are that it usually lags the native SDKs in adopting new operating system features, that Apple may at some point start rejecting apps made with PhoneGap, and that Adobe may start adding proprietary Flash-like technology.
Besides these drawbacks, the other problem is that your app is still made out of browser controls and not the UI widgets that are part of the underlying operating system. You can style away a lot of the differences, but discerning users will be able to tell.
I’m still frustrated. I’m not really happy with the quality of apps produced by the cross platform technologies and I don’t like developing the same thing multiple times using the native SDKs.
I also find it a bit monotonous to develop the same program over and over again for iOS, Android, Blackberry and Windows.
There was a discussion on LinkedIn the other day that started because the latest version of Sage 100 ERP only allows one copy of itself to be installed on a given computer. Many programs operate this way, such as most Microsoft products and other Sage products like Sage 300 ERP. The main reason for this is to avoid confusion when using integration technologies like COM or .Net, since then it's easy to know what you are talking to when you integrate from another program. This is also how the Windows Installer works, so if you want to use that technology then this is what you get.
But the topic came up as to what to do to support multiple customers? The answer given was to use virtualization. We use this fairly extensively here at Sage for Development, QA and Support. This blog posting is to cover a bit more fully our uses of virtualization and some of the things we have discovered along the way.
The Sage 100 and Sage X3 groups use Oracle VirtualBox. This one is nice because it's open source (Oracle acquired it as part of Sun). I've run VMs created with it, but I've never created one myself and don't have much experience with it.
The Sage 300 team uses VMware. It used to be that you could use the VMware Player for free, but now it is only free for non-commercial use; at least it's fairly cheap. Generally you only need the Player and not the Workstation version. One nice feature is Unity, which does an amazing job of integrating the virtual environment with your desktop environment; this is good for demo purposes.
For server-based VMs we use VMware because our experience is that the memory usage is much better than with the Microsoft Windows Server versions (though I haven't played with Windows Server 2012 yet). The MS ones tend to force a lot of locked memory, so you can't run as many VMs. Our support department keeps a library of all supported operating systems crossed with all supported product versions, so if a client problem comes up, say running XX version 3 on Windows XP 32-bit, we boot up the right VM and try to reproduce the customer's problem.
Generally we find it useful to create a base operating system image, like Windows 7 (64-bit), and keep a clean copy that we update now and then with Windows updates. Then when we want a VM we just take a copy of the base operating system and install what we want on top of it. (We also keep some images of popular operating systems with Office and SQL Server installed as a better starting point.) Generally this gives a quick way to get running when a need arises.
We used to use MS Virtual PC a lot, but have moved away from it because MS doesn't seem to be updating it anymore and it doesn't support 64-bit client operating systems. This one is included with MSDN subscriptions, so if you have one of those, you probably have access to it.
It seems Microsoft is repurposing its Virtual PC software as the XP Mode feature, to let you run Windows-XP-only software easily on Windows 7.
Client Operating System Licenses
Generally all the developers at Sage have MSDN Universal subscriptions, so this gives us the licensing to do what we need with the client operating systems. But for most development partners, there is a lot of benefit in having an MSDN subscription as well.
One disadvantage of virtual machines in the past has been how large they are (usually around 32Gig). This uses up disk space fast, but with cheap 3TB hard drives, this doesn’t seem to be much of a problem anymore.
I've found the main thing you need for good performance in virtual environments is lots of memory. If your computer has 8Gig of RAM then you can allocate 4Gig to the VM and still have 4Gig for your base operating system. Even so, I find that frequently switching back and forth between things in the VM and things in the base operating system can be slow, so I like to work for longer periods in one or the other.
Also quite a few laptops have hardware virtualization support turned off by default, going into the BIOS setup and turning this on can speed up VMs quite a bit.
To me virtualization software is quite amazing. I’m astounded that I can just run Windows 8 or Linux easily on my Windows 7 laptop. I think virtualization software has come a long way and is still progressing quickly. If you haven’t tried it out recently and you need to keep things separated, then you really should try one of these out. It saves a lot of headaches not having to worry about the installation of one thing messing up something else you have installed.