Archive for May 2012
I’ve previously blogged about the enhancements to the framework for creating custom SData feeds for applications here and here. In this posting I’m looking at enhancements to our core SData protocol support. We’ve been working hard to add more features and to correct some inconsistencies and bugs.
The main purpose of this exercise is to make SData work better for integrators and to make sure the Sage 300 ERP SData feeds work well with other Sage tools like the Sage CRM dashboard and the Sage Argos Mobile Framework.
SData features are mostly of interest to programmers. However some, like this one, enhance existing integrations between different products. Global schema is a mechanism to return all the SData metadata for a dataset (company) in a single call. In version 6.0A, you could only get the metadata for one SData resource per call. Rather esoteric. But having this enhances our integration with the Sage CRM SData dashboard. Previously, when you created an SData widget pointing to a Sage 300 ERP SData feed, you needed to specify the $schema for a specific feed, something like:
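As an illustrative example (the server, port, company and resource names here are made up, not from the original screenshots), a per-resource schema URL looks something like:

```
http://servername:5493/sdata/sageERP/accpac/SAMINC/arCustomers/$schema
```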
Now you can give the $schema for the whole company using a URL like:
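Again as an illustration (server, port and company name are made up), the company-wide global schema URL ends at the dataset, with no resource name:

```
http://servername:5493/sdata/sageERP/accpac/SAMINC/$schema
```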
Which means you don’t need to know the resource component of the URL. In Sage CRM it looks like this, first you give the URL to the global schema:
Then you get a list of SData resources to pick from in a more human readable form:
Previously you only got the feed you specified. Then you select a feed and hit next to choose the fields you want from that feed.
SData Validation Tool
Sage has written a tool that validates the correctness of SData feeds. This tool is available here (you need to scroll down to near the bottom of the page). The intent of this tool is for anyone, whether internal or external to Sage, to be able to validate any REST web service for SData compliance against what is described in the spec at the SData website. This tool was around in the 6.0A days, but it needed work. Back then, 6.0A passed the feed validator; with the new tool, 6.0A has a lot of problems reported against it. With 2012, quite a bit of work went into making our feeds compliant, which means you can expect them to work as the specification states, and integrations with other SData-aware products and tools become much easier. This tool is continuously being updated and will probably auto-update itself as soon as you install it. Below is a screenshot. Hopefully by release a few of the remaining errors will have been corrected.
As part of our Sage 300 ERP 2012 development, we tested Argos on our SData feeds and produced a sample mobile application.
As part of this development we fixed a couple of bugs and made sure our SData support works well with the Argos SDK. I’ll write a future blog posting with more details on the Argos SDK and how to write mobile applications for Sage 300 ERP. However, if you are interested in Argos, you can check it out now, since the whole project is open source and available on GitHub:
- argos-sdk: https://github.com/sage/argos-sdk
- argos-saleslogix: https://github.com/sagesaleslogix/argos-saleslogix
- argos-sample: https://github.com/sagesaleslogix/argos-sample
We finished implementing e-tags with this version. These allow proper multi-user control when you have multiple sources updating records. Basically, when you read a record, it returns an e-tag which encodes the date and time the record was last modified. When you update the record, this e-tag is included with the update message, and the server can then see if the record has been modified by someone else since you read it. This detects the multi-user conflict. Sometimes the server can just merge the changes silently and everyone is happy; sometimes the server has to send back the dreaded “Record has been Modified by Another User” error response.
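The following Python sketch shows the idea behind this optimistic-concurrency check. The `Store` class and the e-tag format are invented for illustration; nothing here is Sage’s actual implementation.

```python
# Sketch of optimistic concurrency with e-tags: every write produces a new
# e-tag, and an update only succeeds if the caller presents the e-tag from
# the version of the record it originally read.

class RecordModifiedError(Exception):
    """The dreaded 'Record has been Modified by Another User' case."""

class Store:
    def __init__(self):
        self._records = {}  # key -> (record dict, e-tag)
        self._rev = 0       # stands in for a last-modified timestamp

    def _next_etag(self):
        self._rev += 1
        return "rev-%d" % self._rev

    def insert(self, key, record):
        etag = self._next_etag()
        self._records[key] = (dict(record), etag)
        return etag

    def read(self, key):
        # The e-tag travels with the record back to the client.
        record, etag = self._records[key]
        return dict(record), etag

    def update(self, key, record, etag):
        _, current = self._records[key]
        if etag != current:
            # Someone else updated the record after our read.
            raise RecordModifiedError("Record has been modified by another user")
        new_etag = self._next_etag()
        self._records[key] = (dict(record), new_etag)
        return new_etag
```

A client that reads a record and then updates it with the e-tag it was given succeeds; a client holding a stale e-tag gets the conflict error instead of silently overwriting the other user’s change.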
Using Detail Fields in Select Clauses
In 6.0A, if you read an SData feed with header and detail components (like O/E Orders), you got back the header fields and links to the details, even if you specified detail fields in a select clause. This meant that if you wanted both the header and the detail lines you needed to make two SData calls. It was also annoying because the format you got back when reading records was different from the format you use to write them, so you would need to combine the separate header and detail results back together to do an update or insert. Now if you specify detail fields in the select clause you will get back all the specified fields in the XML payload, typically a header with multiple details, all returned in the same call. This saves an SData call, and it’s much easier to deal with, since you now have a correct XML template to manipulate for future inserts and updates.
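As a hypothetical illustration (the server, resource and property names below are made up, not Sage’s actual feed names), such a query might look like:

```
GET http://servername:5493/sdata/sageERP/accpac/SAMINC/salesOrders?select=OrderNumber,CustomerNumber,OrderDetails/Item,OrderDetails/Quantity
```

Because OrderDetails fields appear in the select clause, the response now includes the detail lines nested under each order header rather than just a link to fetch them separately.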
Define Your Own Contracts and Have Your Own Classmaps
In version 6.0A, the only contract we supported for SData feeds created by Sage 300 ERP was the accpac contract. Now, in the classmap file, you can specify the contract your feeds belong to. This setting has always been in the classmap files; it just didn’t work. This means you can ensure that any feeds you define yourself won’t conflict with anyone else’s.
Another problem in 6.0A was that to create your own feeds, you either needed to be an activated SDK application with your own program ID, or you needed to edit one of our existing classmap files. This was annoying, since your changes could well be wiped out by a product update. Now you can copy your own classmap file into an existing application (like O/E); you just need to name it classmap_yourownname.xml and it will be added to the defined SData feeds.
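As a rough sketch only (the element and attribute names below are guesses for illustration, not the actual classmap schema), such a file would pair feed names with the views that back them and declare its own contract:

```xml
<!-- classmap_yourownname.xml: hypothetical structure for illustration -->
<classMap contract="mycompany">
  <resource name="myOrders" view="OE0520" />
  <resource name="myCustomers" view="AR0024" />
</classMap>
```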
Further, all the feeds that make up the accpac contract were oriented to the Orion UI work; they weren’t necessarily a good fit for people doing general integration work. So we are creating a new contract, containing the main master files and documents, that is oriented towards integration work and stateless operation.
SData continues to be a foundation technology that we are building on. Quite a lot of work has gone into improving our SData support in Sage 300 ERP for the forthcoming 2012 release. This will allow us to leverage many other technologies like Argos to accelerate development.
We introduced the Sage 300 ERP Inquiry tool in our 6.0A release. This tool was part of the new Web Based Portal. I blogged about it here; that post was written before release, when we were still calling it the Adhoc Query tool. With the 6.0A release we had query templates for General Ledger (G/L), Accounts Payable (A/P) and Accounts Receivable (A/R). With our upcoming 2012 release, we are adding query templates for Inventory Control (I/C), Order Entry (O/E) and Purchase Orders (P/O).
The new query templates (or data domains) that are being added are:
- Inventory Items
- Inventory Item Transactions
- Order Entry Invoices
- Order Entry Sales History
- Purchase History
- Purchase Orders
Remember that the intent of the Inquiry tool was to be easy to use, so anyone can inquire on their data without requiring any assistance or support.
Let’s look at a few simple examples. For instance, say you want to know everything that our favorite customer, Ronald Black, has purchased from us. We can use the “Order Entry Sales History” template, select our customer, 1200, from a “Finder”, and immediately see a list of everything he purchased. We can then add a totals line to see the total. If we wanted to, we could group this by year or period and see subtotals for these.
Suppose we want to know what our total sales of “Halogen Desk Lights” are and who is purchasing them?
These were just two simple examples using Order Entry Sales History. Hopefully with these data domains for the operations modules, you can get answers to the questions you have about what is happening in your company.
The intent is that you can inquire and report on questions about your inventory and operational transaction histories, to help you make projections, better manage inventory levels and direct marketing to your customers based on their history. Hopefully the P/O queries will help you better manage your vendors and help your company control costs by giving more visibility into its purchasing patterns.
Remember you can print these, export these, and choose the columns and sort order. You can have any number of selection criteria, and there are five types of totals that can all be grouped by a field. Again, the primary idea is to keep the operation of this screen really simple, so anyone can ask questions of their data this way.
If you still need additional data queries (data domains) you can get a developer to create these for you. I blogged on how to do this here. Any of these that you created previously will continue to work; just beware that you will need to move them to the inquiry61a folder when you upgrade. Unfortunately there is still the limitation that you can’t add menu categories to the Inquiry menu, but at least now there are three more existing places to choose from.
We continue to fill out the web based functionality we introduced in Sage 300 ERP Version 6.0A. This is just one of the many new features that our next release Sage 300 ERP 2012 provides.
Sage is always redefining and working to improve its software development methodologies. We’ve transitioned from Waterfall to Agile development. We’ve incorporated User Centered Design into all our development. We’ve spent much more time having everyone in the organization connecting and talking to customers. We bring customers into our offices for “coffee with the customer” chats, where the whole development organization can hear what is working well and what is causing our customers pain.
We are now endeavoring to transition our development process to put User Centered Design up front and to introduce much more creativity into our processes. To some degree, having creativity and processes in the same sentence sounds contradictory; however, we do need organized creativity that ultimately results in shipping software. The goal is still shipping software, not producing PowerPoints. What we want is to ship software that aligns very closely with what customers require and delights users in how appealing, friendly and easy to use it is.
What we are really trying to avoid is delivering software that misses what the customer actually asked for, which unfortunately happens in far too many products.
We started this process with a number of “idea jams”. We ran these as all-day events at our various development center campuses. We picked half a dozen product categories, like manufacturing, and created a team for each. Participants volunteered for a team, which then had the day to generate ideas (usually around 100), narrow them down to the top three, and prepare a business case for them. These were then presented to the whole group, with some very original and animated presentations, and everyone voted on which they felt were the best.
We don’t only take ideas from our idea jams, but also from customer feedback, competitive analysis, disruptive new technologies, business partners, development partners and any other source we can think of.
So out of these processes we accumulated hundreds of great ideas. Now what?
Narrowing Our Focus
Now we want to pick the best ideas to actually implement. So how do we pick and choose? As a first pass, the Product Management teams picked the twenty or so best ideas.
For the second pass, we set up interviews with the CEOs of companies (generally companies that already run our products). We hold these interviews as webinars, and for each interview we generally run three ideas past the CEO using artist’s conceptions, mock-ups and verbal descriptions. Basically, we want ideas that will excite the CEO of a company. After all, the CEO has the ultimate buying power, and if the CEO wants our product then we have the best chance of successfully selling to that company.
As you can imagine, getting an hour of a CEO’s time for this sort of interview can be quite difficult. CEOs are very busy and are barraged by sales pitches and spam on a regular basis.
Call to Action: If you are a CEO, or know a CEO who might be interested in this sort of interview, please contact me by e-mailing Stephen.Smith@sage.com.
The following diagram shows this narrowing-down process. We are now moving a number of ideas into the “Experience Testing” phase, where we really want to focus on the overall experience to ensure we are delivering great business value to our customers in a very pleasing package.
All this initial idea generation, customer validation and experience design then become the first steps in an overall Software Development Process as shown in the next diagram.
This diagram emphasizes the initial phases, so the usual Agile Product Development then is the black box between “Experience Testing” and “Early Adopters”. However with modern development techniques all the boundaries in the diagram are very blurry and there is quite a bit of iterative improvement in each phase.
Basically we want to keep getting continuous customer feedback through the whole process. We never really know how well something will be received until the customer is running the real product on real production data in their real environment. We need to get to that phase as quickly as possible. Development needs to be oriented to delivering a minimum viable product as quickly as possible to get it to the early adopters. This then leads to real feedback and to the all-important “persevere or pivot” decisions that need to be made in any innovation process.
In any iterative process with lots of feedback, it’s very important that we achieve lots of “validated learning”, where the lessons from all this feedback are documented and incorporated going forward, so we keep moving forward and don’t just iterate in circles. The Lean Startup people have very good processes for this, so you really do learn from your mistakes and don’t just keep repeating them.
The goal of this initiative is to make Sage’s product development process more innovative and creative: to incorporate much more structured design into all aspects of development, to encourage far more stakeholder involvement in all parts of the software development process, and to get and incorporate feedback as validated learnings that greatly enhance our products.
And remember, if you are a CEO, or know a CEO who might like to participate in our concept testing, please let me know.
It seems that every day there are more cloud providers offering huge cloud based computing resources at low prices. The sort of Cloud providers that I’m talking about in this blog posting are the ones where you can host your application in multiple virtual machines and then the cloud service offers various extension APIs and services like BigData or SQL databases. The extension APIs are there to help you manage load and automatically provision and manage your application. The following are just a few of the major players:
- Amazon Web Services. This is the most popular and flexible service. There are many articles on how much web traffic is handled by AWS these days.
- Microsoft Azure. Originally a platform for .Net applications, it now supports general virtualization and non-Microsoft operating systems and programs.
- Rackspace. Originally a hardware provider, now offers full services with the OpenStack platform.
- VMWare. Originally just a virtualization provider, has now branched out to full cloud services.
There are many smaller specialty players as well, like Heroku for Ruby on Rails or Google App Engine for Java applications. There are also a number of other large players like IBM, Dell and HP going after the general market.
All of these services aim to easily host, provision and scale your application. They all cater to a large class of applications, whether hosting a standard Windows desktop application in the cloud or providing the hardware support for a large distributed SaaS web application. Many of these services started out for specific market niches like Ruby or .Net, but have since expanded to be much more general. Generally people are following Amazon’s lead in being able to seamlessly deploy anything running in a virtual machine across any number of servers that scale according to demand.
Generally these services are very appealing to software companies. It is quite expensive and quite a lot of trouble to maintain your own data center: you have to staff it 24×7, you are continually buying and maintaining hardware, and you have to duplicate it all in different geographies with full failover. That’s quite a lot of activity that distracts you from your main focus of developing software. Fewer and fewer web sites maintain their own data centers; even large high-volume sites like Netflix and Foursquare run on Amazon Web Services.
Which to Choose?
So which of these services do you choose, and how do you go about choosing? This is a bit of a game where the customer and the service provider have very different goals.
For a customer (a software developer), you want the cheapest service that is also the most reliable, highest performance and easiest to use. Actually, you always want the cheapest, so if something better comes along you want to be able to move over easily. You might even choose two providers, so that if one goes down you are still running.
The service provider, on the other hand, would like to have you exclusively and to lock you into their service. They would like you to be reliant on them, attracting you with an initial low price which they can then easily raise, since switching providers is difficult. They would also like to have additional services to offer you down the road, to increase your value to them as a customer.
Both Amazon and Azure look to lock you in by offering many proprietary services which, once you are using them, make switching to another service very difficult. These are valuable services, but as always you have to be careful about whether they are a trap.
Amazon pretty much owns this market right now, and new players have been having trouble entering it. Rackspace suddenly realized that just providing outsourced hardware wasn’t sufficient anymore and that too much new business was going to Amazon. They realized that creating their own proprietary services in competition with Amazon probably wouldn’t work. Instead, Rackspace came up with the disruptive innovation of creating an open source cloud platform called OpenStack, which it developed in conjunction with NASA. They also realized that so many people were already invested in Amazon that they made it API-compatible with several Amazon services.
OpenStack has been adopted by many other Cloud providers and there are 150 companies that are officially part of the OpenStack project.
This new approach has opened up a lot of opportunities for software companies. Previously, to reduce lock-in to a given vendor, you had to keep your application in its own virtual image and then do a lot of the provisioning yourself. With OpenStack you can start to automate many processes and use cloud storage without suddenly locking yourself into a vendor or having to maintain several different ways of doing things.
Advantages for Customers
With OpenStack, suddenly customers can start to really utilize the cloud as a utility like electricity. You can:
- Get better geographic coverage by using several providers.
- Get better fault tolerance. If one provider has an outage, your service is still available via another.
- Better utilize spot prices to host via the lowest cost provider and to dynamically switch providers as prices fluctuate.
- Have more power and flexibility when negotiating deals with providers.
- Go with the provider with the best service and switch as service levels fluctuate.
One thing that scares software companies is that as soon as they commit to one platform and do a lot of work to support it, a new service suddenly appears that leapfrogs the previous ones. Keeping up and switching becomes a major challenge. OpenStack offers some hope of getting off this treadmill, or at least of making running on it a bit easier.
Is OpenStack Ready?
At this point OpenStack doesn’t offer as many services as Azure or AWS; its main appeal is flexibility. The key will be how well the major companies backing OpenStack can work together to evolve the platform quickly, and how strong their commitment is to keeping the platform open. For instance, will we start to see proprietary extensions in various implementations, rather than contributions back to the open source project?
Amazon and Azure have one other advantage: they are subsidized by other businesses. For instance, Amazon has to have all this server infrastructure anyway to handle the Christmas shopping rush on its web store, so it doesn’t really have to charge the full cost; any money it makes off AWS is really a bonus. By the same token, Microsoft is trying hard to buy market share in this space, taking profits from its Windows and Office businesses and subsidizing Azure to offer very attractive pricing that is very hard to resist.
Apple uses this strategy for iCloud, which runs on both Amazon and Azure. This way it isn’t locked into a single vendor, has better performance in more regions, and won’t go down if one of these services goes down (like Azure did on Feb. 29). Generally we are seeing this strategy more and more, as people don’t want to put all their valuable eggs in one basket.
With the sudden explosion of cloud platform providers, there are huge opportunities for software developers to reduce costs and expand their capabilities and reach. But how do you remain nimble and quick in this new world? OpenStack provides a solid basis for a service while letting you move to new services easily and respond to the quickly changing cloud environment. It will be interesting to see how the OpenStack players can effectively compete with the proprietary and currently subsidized offerings from Microsoft and Amazon. Within Sage we currently have products on all these platforms: SalesLogix cloud is on Amazon, SageCRM.com is on Rackspace and Sage 200 (UK) is on Azure. It’s fascinating to watch how these are all evolving.