Sage Advisor PEP
Sage Advisor is an umbrella term for a number of technologies and programs that are being rolled into all Sage products over their coming releases. Previously I blogged on the Sage Advisor Update project here. In this blog post I’m going to talk about the Sage Advisor PEP (Product Enhancement Program). The intent of this program is to actively gather program usage information so that Product Managers and Application Designers can better focus their work and do a better job of designing and specifying new features for future versions.
This sort of information gathering is becoming very common in the software industry. Microsoft has a very extensive program that it calls the Customer Experience Improvement Program. Mozilla Firefox has a telemetry program to gather performance data. Cisco has its Smart Call Home functionality. All SaaS applications do this big time: every SaaS application logs every call to the web server and can then archive and mine this data endlessly. With SaaS applications you don’t have a choice, since every interaction with the application goes through the web server.
It’s important to remember that participation in this Sage program is purely voluntary and easy to opt out of. Further, no actual data from your database is ever transmitted. We are also subject to various governmental privacy laws, such as HIPAA, and the program is designed accordingly.
This feature has been around for a while now in one form or another. We introduced the “Call Home” feature in Sage 300 ERP 5.6A. This feature sent back information on which modules a customer had activated. It was a one-time message, sent a few months after a new version was installed and activated. With version 6.0A we introduced PEP level 1, which sent similar information to Call Home but sent it to the central Sage collection server rather than a special one used only by Sage 300. With the forthcoming Sage 300 ERP 2012 release, we’ll be implementing level 2, which sends more usage data, as explained below.
To do detailed user testing, a usability lab gives the best results, but it is quite expensive to run and time consuming for the customers who participate. The hope here is to automate some of this process and get a lot of useful data without all the manual work.
The goal of this project is to provide better information to our Product Managers, Usability Analysts, Business Analysts and other developers on how real users use our products. We need to know where users are spending their time, where they are productive and where they aren’t productive. We need to know which parts of the program are working well and which parts are causing problems.
Basically we want to guide our design and efforts based on data and not opinion. This is one of the methods we are using to gather real customer usage data.
Data we are Gathering
One of the things we want to determine is where users spend their time, so we are gathering data on which screens the user opens and how long they spend in each one. From this we can get hard data on which screens are really heavily used, and then spend more effort on improving those screens. We already know users spend a lot of time in some screens, like Order Entry, but we are looking for surprises here. Further, we can see what combinations of screens people run. If they always run A/R Customers at the same time as O/E Orders, then we can infer that this screen holds information required by everyone doing Order Entry, and that to improve the workflow we should make this information more readily available in Order Entry. Generally this is a matter of simplifying workflows and making our customers more productive.
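To make this concrete, here is a minimal sketch of what such screen-usage events might look like. The class and field names are my own illustration, not the actual PEP record format: each event carries the screen name and how long it was open, and aggregating the events shows where time is being spent.

```python
import time
from collections import Counter

class ScreenUsageTracker:
    """Illustrative sketch of screen-usage telemetry (not the real PEP format)."""

    def __init__(self):
        self.events = []   # one record per completed screen session
        self._open = {}    # screen name -> timestamp when it was opened

    def screen_opened(self, screen, now=None):
        self._open[screen] = now if now is not None else time.time()

    def screen_closed(self, screen, now=None):
        now = now if now is not None else time.time()
        opened = self._open.pop(screen)
        self.events.append({"screen": screen, "seconds": now - opened})

    def total_seconds_by_screen(self):
        """Aggregate time per screen -- the 'heavily used screens' report."""
        totals = Counter()
        for event in self.events:
            totals[event["screen"]] += event["seconds"]
        return totals

# Simulated session: A/R Customers is opened alongside O/E Orders,
# hinting that Order Entry users need customer information close at hand.
tracker = ScreenUsageTracker()
tracker.screen_opened("O/E Orders", now=0)
tracker.screen_opened("A/R Customers", now=10)
tracker.screen_closed("A/R Customers", now=40)
tracker.screen_closed("O/E Orders", now=300)
print(tracker.total_seconds_by_screen().most_common(1))  # → [('O/E Orders', 300)]
```

In a real client the timestamps would come from the clock rather than being passed in; injecting them here just keeps the sketch deterministic.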
We want to simplify the parts of the program where users are having difficulty. To do this we are recording usage of the Help. Basically, we record every link into the help; this way we can determine which parts of the program people find difficult enough that they have to consult the help. Then we can work on the associated forms to make them more intuitive, so users don’t need the help anymore.
Along the same lines, we are recording all error messages displayed. This is to see if we can change the workflow so the user doesn’t get errors. Also, if we can proactively avoid error situations, we hope to avoid a lot of support calls. For instance, if after installing many people get a certain error that indicates things aren’t set up correctly, can we modify our installation program so people won’t run into it?
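A simple sketch of how recorded error messages might be ranked is below. The error strings and counts are invented for illustration; the point is that the most frequent errors in the field become the top candidates for workflow or installer changes that prevent them entirely.

```python
from collections import Counter

# Hypothetical stream of error events as they might arrive from the field.
error_log = [
    "Cannot connect to database",
    "Invalid fiscal period",
    "Cannot connect to database",
    "Printer not found",
    "Cannot connect to database",
]

# Rank errors by frequency; the top entries are the ones worth
# designing away before they generate support calls.
top_errors = Counter(error_log).most_common(2)
print(top_errors[0])  # → ('Cannot connect to database', 3)
```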
Many Sage corporate presentations start with a slide proclaiming we have 6.3 million customers. This is great, but with Sage Advisor PEP it also means we could have 6.3 million customers sending usage data to a corporate web server, and all this data needs to be recorded and analyzed.
This starts to put us into the world of “Big Data”. I blogged about Big Data and ERP here. Currently we are gathering all the data into SQL Server, but this is already strained with only a few Sage products contributing. We are moving the data from SQL Server to a NoSQL database to perform the analysis. As the volume of data continues to grow, we will probably need to replace SQL Server entirely with something more scalable, and this is a classic use case for a NoSQL database. To me this is an exciting initiative to use and become familiar with Big Data technology. As Sage moves forward this will become a more and more important technology to gain expertise in.
We do take care that we won’t delay people using their business application in order to send usage data. We always start a new thread or separate program to transfer the data, so we don’t block the main program for the user while it uploads. Also, we don’t consider this data “crucial”, so we don’t need to worry too much if some of it is lost because the system is too busy.
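The pattern described above can be sketched as follows. This is my own minimal illustration, not the actual Sage implementation: a bounded queue feeds a background thread, the UI thread never blocks, and events are silently dropped when the queue is full, which is acceptable precisely because the usage data is not considered crucial.

```python
import queue
import threading

class UsageUploader:
    """Sketch of non-blocking telemetry upload: a background worker drains
    a bounded queue, and events are dropped when the system is too busy."""

    def __init__(self, send, maxsize=1000):
        self._queue = queue.Queue(maxsize=maxsize)
        self._send = send  # callable that actually transmits one event
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def record(self, event):
        """Called from the UI thread; never blocks."""
        try:
            self._queue.put_nowait(event)
        except queue.Full:
            pass  # busy system: losing some usage data is acceptable

    def _drain(self):
        # Runs on the background thread; get() blocks until an event arrives.
        while True:
            self._send(self._queue.get())

# Usage: 'sent.append' stands in for the real network transmission.
sent = []
uploader = UsageUploader(sent.append, maxsize=10)
uploader.record({"screen": "O/E Orders", "seconds": 120})
```

The daemon thread dies with the process, which again is fine only because unsent events are allowed to be lost; a crash reporter would need a flush-on-exit step instead.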
Gathering usage data is becoming more and more common in the software industry, and Sage is stepping up our efforts to gather good usage data from all our products. The primary goal is to feed this information back into the organization to improve our products and processes, and to become more scientific in the ways that we improve our products.