Stephen Smith's Blog

Musings on Machine Learning…


SEOS


Introduction

Sage ERP Online Services (SEOS) is a system developed alongside the online/cloud versions of Sage 200 ERP (UK) and Sage Murano ERP (Spain). Its purpose is to provide a common set of functionality for all Sage online applications, such as creating new tenants and users, managing audit logs, managing cloud credentials and managing cloud resources. The system also handles common functions such as integration with the various Sage billing systems. Although SEOS was originally developed as part of the Sage 200 and Sage Murano online projects, it was always intended to be a general tool used by all Sage cloud products. SEOS has been in successful production with both Sage 200 and Sage Murano for some time now, and it is now being adopted by various other Sage cloud products in both Europe and North America.

A lot of what SEOS does is behind the scenes, and customers and partners generally don’t see it as a separate service. But I thought it might be of interest to describe what is, or will be, going on behind the scenes in our various cloud products. This article only talks about SEOS in general and doesn’t make any claims about which product will adopt SEOS next or any timelines for such adoption. But people using Sage 200 or Sage Murano probably already have some experience with the SEOS portal.

One of the key goals of SEOS is to automate everything so that there are no manual processes for creating new tenants or users; it also orchestrates disaster recovery, backup/restore and the auto-scaling of resources.

SEOS Services

SEOS offers a number of different ways to interact with it: a web portal for the DevOps/IS people who run the system, a web portal for customers and partners, and a set of APIs that the various ERPs communicate with. Some of the main components are shown below.

[Diagram: the main SEOS components]

The main job an ERP has to do to integrate with SEOS is to provide a provisioning engine. The engine runs all the time (in fact at least two instances run, for high availability) and checks the SEOS job queue to see if it has anything to do. Tasks it might be asked to perform include creating a new tenant, creating a new user or suspending a tenant.
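To make the idea concrete, here is a minimal sketch of such a polling loop in Python. Everything here is an assumption for illustration: the SEOS endpoint paths, the job fields and the handler functions are invented, since the real SEOS API isn’t documented in this post.

```python
# Hypothetical sketch of an ERP provisioning engine polling the SEOS job queue.
# Endpoint names, job fields and handlers are assumptions, not the real SEOS API.

import time
import requests

SEOS_API = "https://seos.example.com/api/v1"   # hypothetical base URL
API_KEY = "..."                                # credentials issued by SEOS

def fetch_next_job():
    """Ask SEOS whether there is any work for this ERP's provisioning engine."""
    resp = requests.get(f"{SEOS_API}/jobs/next",
                        headers={"Authorization": f"Bearer {API_KEY}"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json() or None   # e.g. {"id": 42, "type": "CreateTenant", ...}

def report_result(job_id, status):
    """Tell SEOS whether the job succeeded or failed."""
    requests.post(f"{SEOS_API}/jobs/{job_id}/result",
                  json={"status": status},
                  headers={"Authorization": f"Bearer {API_KEY}"},
                  timeout=30)

HANDLERS = {
    "CreateTenant":  lambda job: print("create tenant", job["tenantId"]),
    "CreateUser":    lambda job: print("create user", job["userId"]),
    "SuspendTenant": lambda job: print("suspend tenant", job["tenantId"]),
}

def run():
    while True:                      # at least two engine instances run this loop
        job = fetch_next_job()
        if job is None:
            time.sleep(15)           # nothing to do; poll again shortly
            continue
        handler = HANDLERS.get(job["type"])
        try:
            handler(job)
            report_result(job["id"], "Succeeded")
        except Exception:
            report_result(job["id"], "Failed")

if __name__ == "__main__":
    run()
```

In production the engine would also need retry logic, locking so two engine instances don’t grab the same job, and proper logging, but the overall shape is just this fetch/dispatch/report loop.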

Bootstrapping the System

Creating a new cloud instance of an ERP that is integrated with SEOS just requires the provisioning engine to be running. The engine is either set up manually or via some PowerShell scripts. Once the provisioning engine is going, everything else happens inside it as it gets messages from SEOS. When it gets its first create-tenant message, it will see that no application servers are running and create a couple (for high availability), create the database and do anything else needed for that tenant to operate. For the first tenant that could mean creating quite a few storage, database, VM or other resources.

Then, as tenants are added, the provisioning engine monitors the usage of the system (with the help of SEOS) and starts additional resources (such as application servers) if it determines that the system needs to scale to handle the additional load.
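As a rough illustration of that scaling decision, here is a sketch of the kind of threshold rule a provisioning engine might apply to the usage figures SEOS reports. The metric names, the 80% threshold and the request_resource callback are all made up for the example.

```python
# Illustrative sketch only: a simple threshold rule a provisioning engine might
# use to decide when to ask SEOS for another application server.

def needs_more_app_servers(metrics, max_sessions_per_server=50):
    """metrics example: {"app_servers": 2, "active_sessions": 130}"""
    capacity = metrics["app_servers"] * max_sessions_per_server
    return metrics["active_sessions"] > 0.8 * capacity   # scale at 80% of capacity

def scale_if_needed(metrics, request_resource):
    if needs_more_app_servers(metrics):
        # request_resource stands in for whatever SEOS call creates the VM/role
        request_resource(kind="ApplicationServer", count=1)

# Example: with 2 servers and 130 sessions, 130 > 0.8 * 100, so one more is requested.
```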

Thus nearly all the work of creating a large cloud system, possibly consisting of hundreds of VMs, is brought about automatically by the SEOS/provisioning engine combination.

This also helps with disaster recovery: when SEOS switches to the alternate data center, it handles switching the databases and bootstraps the whole environment in the same way. Note that databases are a bit different, since they are already being replicated to the alternate site; you just want to promote the replica to primary and use that.

Multi-Cloud

When the provisioning engine requires a new resource, such as an Azure SQL database or a new Web Role (VM), it doesn’t go directly to the cloud API to get it (say using the Azure SDK). Instead it asks SEOS for the resource and SEOS creates it for the ERP. This way the ERP never uses the native cloud API; it just uses the SEOS API. That opens up the possibility of hosting these cloud ERPs in different clouds.
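The pattern is essentially a resource-provider abstraction. The sketch below shows the shape of it; the class and method names are invented for illustration and are not the real SEOS or Azure interfaces.

```python
# Sketch of requesting cloud resources through an abstraction rather than a
# specific cloud SDK. Names are invented for illustration.

from abc import ABC, abstractmethod

class ResourceProvider(ABC):
    @abstractmethod
    def create_sql_database(self, name: str) -> str: ...
    @abstractmethod
    def create_app_server(self, size: str) -> str: ...

class SeosResourceProvider(ResourceProvider):
    """The ERP only ever talks to SEOS; SEOS decides which cloud to use."""
    def create_sql_database(self, name: str) -> str:
        # would call the SEOS API here and return a connection string
        return f"seos-db:{name}"
    def create_app_server(self, size: str) -> str:
        # would call the SEOS API here and return a server id
        return f"seos-vm:{size}"

# Because the ERP codes against ResourceProvider, SEOS is free to back this
# with Azure today and a different cloud provider tomorrow.
def provision_tenant(provider: ResourceProvider, tenant: str) -> dict:
    return {
        "database": provider.create_sql_database(f"{tenant}-db"),
        "app_server": provider.create_app_server("Medium"),
    }
```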

Currently all the SMB ERPs are hosted in the Microsoft Azure cloud because we get very good pricing on it and it meets our needs extremely well. However, we don’t want to put all our eggs in one basket, and if conditions change dramatically we can much more easily switch to other providers. There are other reasons we may need to do this; for instance, Azure doesn’t have a data center in Africa and we have a lot of customers in Africa, so we may need a provider closer than Singapore.

DevOps

DevOps is the group that runs all the various Sage cloud offerings (its official name varies from region to region, but the idea is the same). Having DevOps manage dozens of cloud products that all work in different ways with different maintenance procedures would be a huge challenge. SEOS brings all these aspects together into one set of procedures.

Take logging for example. It’s easy for any application to generate huge log files for diagnostic and auditing purposes, and there are lots of good APIs for doing this. But managing these logs is a challenge. The logs need to be archived for future reference. Generally there are several types of logs, like the Windows Event Log, an application diagnostic log and an application security log. All of these need to be kept in a central spot, including backing them up. There have to be easy ways to search them, say by tenant, event type or date/time; many people use Elasticsearch to search their logs. SEOS provides a uniform way of managing these and automates most of the process. This way DevOps only needs to know one way of managing logs rather than a separate way for each ERP, and the automation avoids manual procedures and mistakes.
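A small sketch of the idea: if every product writes the same kind of structured log entry, central storage and searching by tenant, event type or date become straightforward. The field names below are assumptions for illustration, not the actual SEOS log schema.

```python
# Minimal sketch of a uniform, structured log entry that makes central
# storage and searching possible. Field names are illustrative.

import json
from datetime import datetime, timezone

def log_event(stream, tenant_id, event_type, message, **extra):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tenant": tenant_id,
        "eventType": event_type,    # e.g. "Security", "Diagnostic", "Audit"
        "message": message,
        **extra,
    }
    stream.write(json.dumps(entry) + "\n")   # one JSON document per line

def find_events(lines, tenant_id=None, event_type=None):
    """Filter a stream of JSON-lines log entries; a search index such as
    Elasticsearch would do this at scale."""
    for line in lines:
        entry = json.loads(line)
        if tenant_id and entry["tenant"] != tenant_id:
            continue
        if event_type and entry["eventType"] != event_type:
            continue
        yield entry
```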

Billing Engines

Sage is a large company and operates around the world. Most of our products are charged for, and the backend systems that do billing vary from region to region. Various geographic regions have extra regulatory requirements and different business rules. SEOS handles providing usage data to the billing engine via a standard adapter; each region has to write an adapter to interface with SEOS. Removing the burden of interfacing with the Sage billing systems is a huge benefit to the ERP teams, who really don’t want to have to deal with these details.
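The adapter idea might look something like the following sketch: SEOS emits a standard usage record and each region implements one adapter that translates it for its own billing system. The class names and record fields are illustrative assumptions.

```python
# Sketch of a per-region billing adapter fed by standard SEOS usage records.
# Class and field names are illustrative, not the real SEOS interfaces.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import date

@dataclass
class UsageRecord:
    tenant_id: str
    product: str
    users: int
    period_end: date

class BillingAdapter(ABC):
    @abstractmethod
    def submit(self, records: list[UsageRecord]) -> None: ...

class UkBillingAdapter(BillingAdapter):
    def submit(self, records: list[UsageRecord]) -> None:
        for r in records:
            # translate the standard record into whatever the regional
            # billing system expects and send it on
            print(f"UK billing: {r.tenant_id} {r.product} x{r.users}")
```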

Summary

If you are a partner or customer using Sage SMB Cloud offerings in Europe, then you have probably already seen SEOS and know something about it. If you are using Sage SMB Cloud offerings in other parts of the world, then you will probably start to see SEOS appearing over the next while. It will probably appear more as individual features or services, but those are being brought to you by the SEOS global initiative.

Written by smist08

August 9, 2014 at 3:34 pm

Looking Back on 2013 and Forward to 2014


Introduction

2013 was a very busy year for all of us on the Sage 300 ERP R&D team. Most of us are located in Richmond, BC, but we have remote team members in Kelowna, BC, Victoria, BC, North Carolina, Chennai, India and Bangalore, India. As we approach New Year 2014 let’s look back on what we’ve been up to this past year.


Product Update 1

We started the year by releasing the Sage 300 ERP 2012 Product Update 1. This included new features like Colorful Companies and a major new release of Canadian and US Payroll.

As always, a lot of work goes into product sustainment: fixing problems caused by new versions of Windows, strange interactions discovered with other programs, and other bugs. Diagnosing and tracking down the root causes of these can be quite demanding work, but it is completely necessary to keep our customers happy. Quality is always a high priority and always an ongoing concern.

Sage 300 ERP 2014

We have a core team that works on improving the Sage 300 ERP core product and they spent their time working on the Sage 300 ERP 2014 release. This version has been released to a limited VIP set of customers and will start to be publicized to the full Sage 300 audience in March.

I wrote quite a few articles on the various things in this release including:

Another major improvement in Sage 300 ERP 2014 that I haven’t blogged about yet is the Crystal Reports support. With this release we are bundling Service Pack 8 of the Crystal Reports 2011 runtime. This service pack has some major performance improvements over the version of the Crystal Reports runtime bundled with the 2012 release. In fact, someone in our office reverse engineered the Crystal runtime to figure out why it was becoming quite slow in some cases and fed this information to the Crystal Reports development team (which is located nearby in downtown Vancouver) so that the problem could be rectified. If you are having trouble with performance on the 2012 version, you can now get Service Pack 8 from Crystal’s website and install it. With this service pack we now get performance that is consistently better than that experienced with the older runtime bundled with 6.0A, but with all the extra features and supported platforms provided as part of the 2011 version. Generally, being in the same city makes working with the Crystal development team easier, plus we have a lot of cross-over since Crystal Reports was originally developed as an add-in reporting tool for Accpac Plus.

With this release we add support for Windows 8.1, Windows Server 2012 R2 and Office 2013. We have also added support for running when Windows DEP is turned on. Generally, we are updating whatever is needed to ensure we run well in any modern environment.

Windows 8 and Windows Server 2012 had a bug in their printer drivers that caused many fonts to be converted to bitmaps. This defeated products like PrintBoss as well as things like MICR checks and anything else that is very sensitive to fonts. The problem was fixed by Microsoft in Windows 8.1 and Server 2012 R2, so if you are running into it, make sure you upgrade to these (the upgrades are free).

Sage 300 Online

Another major project was the new Sage 300 Online, which we unveiled at Sage Summit. This is a major revamp of our hosted offering. We now host Sage 300 in the Microsoft Azure cloud and have greatly improved the manner in which we virtualize Sage 300. Authentication is handled by SageID, which is also used for our connected services and Sage One. All the management, upgrades and tuning are handled by Sage, so customers don’t need to worry about any of this.


We are managing the site with a new DevOps team that consists of members dedicated to the site along with other members of R&D and IS. This is a new model for running the site, managing loads and keeping everything and everyone up to date.

Sage 300 Online has now started to onboard our first customers. At first we are starting out slowly to ensure everything keeps working well, but expect to see a major push for this service in a few months.

We are now switching to a continuous delivery model, so expect to see improvements to this service being put online frequently.

Sage Connected Services

We have team members helping out with the Sage Connected Services. This year we released: Sage Mobile Service, Sage Mobile Sales and Sage Billing and Payments. We are now working on new versions of these as well as working on building out the Sage Data Cloud platform and bringing out further mobile and web applications.

The cloud part of these applications is developed for the Microsoft Azure PaaS platform. The mobile applications are written as native applications for iOS and in the future Android.

Our core team is responsible for maintaining the Sage 300 connector, which transfers data back and forth between an on-premise database and the Sage Data Cloud.

Next Year

As we go into 2014 we are building on all these initiatives. The general roadmap is still what I outlined here; however, as we go through the year many of the details will be announced, such as what is in the next version of Sage 300 ERP, what will be in the Product Updates and what the next version of Sage 300 Online will actually look like.

Besides product changes, there will be a lot of process changes. Now that Sage 300 Online is launched, we will be living in a continuous delivery, DevOps and cloud-first world. Seamless, invisible upgrades will be the norm, and features can be deployed to all our cloud customers instantly and frequently. Ensuring that all these new processes run smoothly will be one of our major challenges in 2014.

Summary

Sage 300 ERP is an important strategic product for Sage. There is a lot of exciting development going on. We accomplished a lot in 2013 and look forward to even more in 2014. I am really looking forward to attending our various conferences this year to show off all the exciting work that is currently going on behind the scenes.

Written by smist08

December 28, 2013 at 6:33 pm

Sage 300 Online


Introduction

At our Sage Summit conference we officially rolled out our new Sage 300 Online service. Sage 300 ERP has had an online hosted version for over ten years. The new Sage 300 Online service is meant to radically modernize our cloud version. It is now hosted in Microsoft Azure, sign on is via SageID, and the virtualization software is vastly improved.

We support all the standard Sage 300 ERP Modules, along with integrations to other Sage applications like Sage Exchange and host a number of third party ISV products. Financial Reporting and Business Intelligence is provided via Sage Intelligence (Sage Alchemex).

Using the cloud version of Sage 300 means that you don’t need to maintain your own servers or other hardware, you don’t need to be responsible for backing up your data and you aren’t responsible for data center support. Further you aren’t responsible for maintaining and upgrading the software. We install all necessary hotfixes and product updates. We even perform major version upgrades for you, which can be a major cost saving.

Modern businesses tend to be distributed over many geographic locations with many employees working from home or working while on road trips. Using a cloud based ERP allows all these people to access the central cloud based ERP from any place with Internet access. This is much easier than maintaining expensive and slow VPN type networks.

First we’ll run through how the service looks and then talk about some of the various aspects of it.

Usage

To access and run Sage 300 Online, you go to a URL in your browser that redirects you to SageID.

[Screenshot: the SageID sign-in page]

You now enter your SageID credentials and are signed into the system. This leads to the following page which gives a list of programs that you are allowed to run.

[Screenshot: the list of programs you are allowed to run]

Clicking on “Sage 300 Online” will then launch the Sage 300 Online desktop. Below is a screenshot of the desktop launched from the landing web page, running the Order Entry UI.

[Screenshot: the Sage 300 Online desktop running the Order Entry UI]

Notice that it now looks exactly like you are running Sage 300 locally. You no longer run a terminal server client which presents a new Windows desktop inside your existing Windows desktop.

SageID

When you first sign up for the service, you provide an e-mail address which will become your SageID. This first user is the administrator for your company and can invite other users from a SageID administrative website to use the system. SageID will then be your login for our site.

Your SageID will be your login for all Sage Cloud services and is also connected to the back end billing systems, so we can provide one bill for all your services attached to your SageID. This includes the Sage Mobile applications unveiled at Sage Summit. This will make reporting and billing very simple for cloud users.

Microsoft Azure

Our Sage 300 Online service is hosted in Microsoft Azure. These are Microsoft’s cloud data centers, with locations all over the world. We use Azure for its reliability and redundancy; if something were to happen to one data center we could operate out of another. Also, the Microsoft network that connects these data centers to the Internet is extremely fast and has very low latency, leading to great performance.

Microsoft Azure supports both PaaS and IaaS infrastructures. We use PaaS infrastructure for the initial web pages and for the databases. We use IaaS infrastructure to run Sage 300 ERP in a virtualized environment.

Microsoft Azure allows us to offer our cloud solutions to all our customers no matter where in the world they are located from data centers that are in their region or close to their region. Currently Microsoft has Azure Data Centers in the US, Ireland, Hong Kong, Singapore and Amsterdam with locations opening soon in Sydney and Melbourne.

Microsoft Virtualization Framework

Within the Azure environment we run a standard Microsoft virtualization environment that Microsoft has created for any data center. This runs on a number of Windows Server 2012 machines. The virtualization environment is accessed via RDP 8, which supports our SageID integration, virtualizes just the applications (rather than a full Windows desktop login), and supports expected functionality like copy/paste, printing to local printers and transferring files to and from the cloud environment.

Microsoft Azure SQL

We run the Sage 300 company and system databases on Azure SQL, which is a PaaS implementation of SQL Server. This database provides reliability by performing every write to three separate places. Further, as Microsoft develops its cloud roadmap, many new features and services are planned for Azure SQL that will improve the general robustness of cloud solutions.

ISVs

With our initial offering we don’t support all Sage 300 third party solutions, but we do provide a set of the most popular ones. Generally, a third party solution has to be able to run in the cloud, which usually means it must be written with the Sage 300 SDK (the exception that proves the rule is PrintBoss).

The third party products we support are: Orchid EFT and RMA, Pacific Technologies Purchasing Workflow and Funds Availability, Wellspring PrintBoss, Aatrix Payroll Reports and Avalara Sales Tax.

Customization

Since all companies using this cloud solution are running the same Sage 300 programs, you cannot customize the system by installing your own EXEs or DLLs, since these would then be used by every company subscribing to the system. Similarly, security is a much bigger concern in the cloud, and we have to carefully regulate what can get into the system.

Also since this is a multi-tenant environment we can’t allow any arbitrary third party solution to be installed. For any ISV that wants to participate in this offering, we need to verify that their solution will run in our Azure setup correctly and not cause any problems for other tenants.

Summary

We’ve started letting people use our Sage 300 Online offering through a Sage Summit preview program. Then, in a couple of months, we’ll be making it officially generally available. The Microsoft Azure cloud gives us much more scalability and global reach than we previously had. Integrating with SageID and using newer virtualization technology greatly improves the usability and convenience of the product. We are very excited about this project and look forward to adding to it as we develop it further.

Azure and PaaS Versus IaaS


Introduction

Sage 300 ERP has had a cloud version for over ten years now with Sage 300 Online. I blogged a bit about how this offering works here. Basically, we offer our on-premise product in the cloud, relying on Citrix and Terminal Services technologies for hosting and access, so you are running your on-premise Windows desktop ERP application in the cloud. The only thing required on the local computer to access the software is the Citrix client component, which is available for Windows, Mac, iPad, etc. We are currently hosting this in Sage’s Atlanta data center.

We are now looking to produce the next generation of this cloud offering, and we plan to host it in Microsoft Azure rather than in our own data center. There are quite a lot of reasons for this, like being able to deploy globally to the various Azure data centers and taking advantage of the Microsoft network, which provides very low latency access to those data centers. But the Azure question we are going to explore in this blog posting is: what does Azure PaaS give us over an IaaS offering? What are the differences and what are the considerations?

IaaS

IaaS stands for Infrastructure as a Service. In this model the provider, typically someone like AWS or Rackspace, provides all the hardware, storage and networking equipment. Then they give you the ability to create virtual machines in this environment (often providing some starting templates of typical servers). It is then up to you to install everything you need to run on these virtual machines and to maintain them fully going forwards, ensuring all patches are installed, upgrading to new versions of the operating system, etc. If you need a database server then it’s up to you to buy and install that database server on one of your virtual machines and it is up to you to maintain this server, back it up, provide geographic redundancy and anything else that is required to keep the data safe.

PaaS

PaaS stands for Platform as a Service. The goal of PaaS is to build on IaaS to have the vendor also take care of maintaining the operating system and the database server. So you don’t need to worry about patches, backing things up and various other chores. Microsoft Azure is an example of a PaaS offering (although they are getting into offering IaaS as well). Below is a diagram showing how the layers are often separated. This also gives the vendor the opportunity to provide more advanced monitoring and management tools for the platforms they are supporting.

[Diagram: cloud computing layers]

Complexities of PaaS

That all sounds good, but how do you separate the maintenance of the operating system from the maintenance of the application? Generally the way this is done is that whenever you start a VM you start with a fresh copy of the operating system, which is then guaranteed to be at the most recent patch level. In the IaaS model you start with the operating system template once; then everything that happens to that image is stored to disk, and if the image is stopped and started you continue from where you left off. In the PaaS model, if your image is stopped and started, it is restarted with a fresh image of the operating system and you need to install all your software before it can start processing.

This is why most PaaS systems tend to be very specialized in the type of applications they can run. Azure is built around running ASP.Net applications. Microsoft provides support in Azure and .Net to easily deploy your ASP.Net application to a new operating system image as it starts up. Similarly Ruby on Rails PaaS vendors provide support for installing the Rails application on their new image as it starts.

But doesn’t that mean you have to transfer huge amounts of files to the environment every time a VM starts? Well actually no. Azure provides a number of non-VM storage mechanisms for storing files in the environment. So for instance you can use blob storage or one of the other storage mechanisms to hold all your files and data. Then the scripts that start the VM would just attach these to the VM when it starts, or your program knows how to use the Azure SDK to connect to these when it runs.
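As an illustration of that pattern, here is a small sketch of a startup step that pulls the application’s files out of blob storage onto a freshly started instance. It uses the current azure-storage-blob package for Python purely as an example; the deployment described in this post actually used PowerShell/startup scripts, and the container name and paths here are invented.

```python
# Illustrative sketch: on a fresh PaaS instance, pull application files from
# blob storage before serving traffic. Container name and paths are made up.

import os
from azure.storage.blob import BlobServiceClient

def restore_app_files(connection_string: str, container: str, target_dir: str):
    service = BlobServiceClient.from_connection_string(connection_string)
    container_client = service.get_container_client(container)
    for blob in container_client.list_blobs():
        local_path = os.path.join(target_dir, blob.name)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        with open(local_path, "wb") as f:
            f.write(container_client.download_blob(blob.name).readall())

# Example (values are placeholders):
# restore_app_files(os.environ["STORAGE_CONNECTION_STRING"], "app-files", r"C:\app")
```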

PaaS databases are a bit easier, since you create and access your SQL Server databases in Azure SQL just like you would in a regular SQL Server; you just need to know the right information for the connection string to the Azure SQL server. But there are still issues to be aware of. For instance, when you create your own set of VMs to make a production system, you can specify them in a VM group and ensure they all run in the same area of the Azure data center. This ensures any latency of communication between these VMs is very low. But when you switch to the PaaS database server, all the servers that comprise, say, Azure SQL run together in another area of the data center, and the latency from your server group to that server group is a bit higher than communication within your own group. Further, you are then sharing those servers with other unrelated applications, so you are relying on the vendor to provide sufficient processing power to keep everything performant. But the best thing about PaaS databases is that you don’t need to worry about tuning the server or backing up all the data.
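For example, from application code the connection really does look like a normal SQL Server connection; only the connection string differs. The sketch below uses pyodbc, and the server, database and credential values are placeholders.

```python
# Connecting to Azure SQL looks just like connecting to on-premise SQL Server;
# only the connection string changes. All values below are placeholders.

import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=Sage300Company;"
    "Uid=appuser;Pwd=...;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT 1")    # simple connectivity check
    print(cursor.fetchone())
```

Retry handling for transient connection drops is the sort of extra case, mentioned below, that needs handling in our database driver for the Azure environment.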


Moving Sage 300 ERP to PaaS

But Sage 300 ERP is not an ASP.Net application. So how can we even run in this environment? Fortunately, Microsoft provides both IaaS and PaaS services in the Azure environment. This means we can move our current Sage 300 ERP cloud implementation from our Atlanta datacenter to Azure by using the Azure IaaS infrastructure. We can then start to take advantage of various PaaS services one by one.

The whole Azure infrastructure provides a very rich API which allows you to use PowerShell scripts to control what is going on. Hence the ASP.Net support is really a set of scripts, built into the Azure/Visual Studio SDK, developed to handle typical deployments. For Sage 300 ERP we are developing similar scripts to create and control a Sage 300 ERP deployment. This way we can incorporate PaaS building blocks into our system, so we can have scripts to start and configure the whole system, scripts to add application servers and so forth. We can even integrate these into standard DevOps tools such as RightScale or Chef.
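The overall shape of those scripts is simple orchestration: bring up the core pieces, then add capacity on demand. The sketch below captures that shape in Python; every function is a hypothetical stand-in for the real PowerShell/Azure calls, and the resource names are made up.

```python
# A sketch of the kind of orchestration the deployment scripts perform.
# Each helper is a hypothetical stand-in for cloud/SEOS provisioning calls.

def create_storage_account(name): print(f"storage account {name} ready")
def create_sql_database(name): print(f"database {name} ready")
def create_app_server(index): print(f"application server {index} running")
def configure_load_balancer(count): print(f"load balancer points at {count} servers")

def deploy_system(app_server_count=2):
    """Initial bring-up: storage, database, app servers, load balancer."""
    create_storage_account("sage300files")
    create_sql_database("sage300system")
    for i in range(app_server_count):
        create_app_server(i)
    configure_load_balancer(app_server_count)

def add_app_server(current_count):
    """Called later by the scaling logic when load grows."""
    create_app_server(current_count)
    configure_load_balancer(current_count + 1)

if __name__ == "__main__":
    deploy_system()
```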

Leveraging Azure SQL is perhaps a bit easier since you can use the same access technology we currently use. We have to add a bit of code to our database driver to handle some cases that can happen in the Azure environment but generally using Azure SQL is very similar to using regular SQL.

The system that makes up Sage 300 ERP cloud consists of quite a few VMs all performing various tasks. Switching some of these to PaaS-based servers is very straightforward, so we can do this right away. Our application servers that have full Sage 300 ERP installed will continue to be maintained as IaaS for the foreseeable future, since right now the overhead of installing full Sage 300 ERP is a bit high. But there are features coming on the Azure roadmap which will, over time, let us migrate these to PaaS as well. To some degree PaaS is still in its early stages; new services are regularly being rolled out and we can take advantage of them as they appear.

In our Atlanta data center we operate Sage 300 Cloud on a fixed set of servers that we purchased, and as usage grows we buy another server now and again. With a managed service like Azure we pay for our infrastructure on a usage basis. However, the system is elastic, meaning that we only need to run as many application servers as are needed to handle the current load. So if usage is low during the weekend we can shut down some application servers and save some money. Similarly, if we get a big spike in usage (perhaps at year end) we can instantly add as many application servers as we need to handle the load. Again, we basically control all of this with a number of PowerShell scripts.
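The weekend/year-end example boils down to a simple sizing rule like the one sketched below; the user counts, the capacity per server and the minimum of two servers for high availability are illustrative assumptions.

```python
# Sketch of an elastic scaling rule: fewer application servers when load is
# light, more when it spikes. Thresholds and counts are made up.

def desired_app_servers(active_users, min_servers=2, users_per_server=40):
    """Always keep at least min_servers for high availability."""
    needed = -(-active_users // users_per_server)   # ceiling division
    return max(min_servers, needed)

# Weekend lull: 30 users -> 2 servers (the HA minimum).
# Year-end spike: 900 users -> 23 servers.
print(desired_app_servers(30), desired_app_servers(900))
```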

Summary

Microsoft Azure is a very rich cloud environment and it provides a lot of benefits that help us beef up our Sage 300 ERP cloud offering. But with that richness comes quite a bit of work: creating scripts to take advantage of it, tuning our application for the environment, and climbing the learning curve to find the best ways to leverage all these new PaaS features.

Written by smist08

January 13, 2013 at 12:59 am

Choosing Between Cloud Providers


Introduction

It seems that every day there are more cloud providers offering huge cloud-based computing resources at low prices. The sort of cloud providers I’m talking about in this blog posting are the ones where you can host your application in multiple virtual machines and the cloud service offers various extension APIs and services like big data or SQL databases. The extension APIs are there to help you manage load and automatically provision and manage your application. The following are just a few of the major players:

  1. Amazon Web Services. This is the most popular and flexible service. There are many articles on how much web traffic is handled by AWS these days.
  2. Microsoft Azure. Originally a platform for .Net applications, it now supports general virtualization and non-Microsoft operating systems and programs.
  3. Rackspace. Originally a hardware provider, it now offers full services with the OpenStack platform.
  4. VMware. Originally just a virtualization provider, it has now branched out to full cloud services.

There are many smaller specialty players as well, like Heroku for Ruby on Rails or the Google App Engine for Java applications. There are also a number of other large players, like IBM, Dell and HP, going after the general market.

All of these services are looking to easily host, provision and scale your application. They all cater to a large class of applications, whether hosting a standard Windows desktop application in the cloud or providing the hardware support for a large distributed SaaS web application. Many of these services started out for specific market niches like Ruby or .Net, but have since expanded to be much more general. Generally, people are following Amazon’s lead in being able to seamlessly deploy anything running in a virtual machine over any number of servers that can scale according to demand.

Generally these services are very appealing to software companies. It is quite expensive and quite a lot of trouble to maintain your own data center: you have to staff it 24×7, you are continually buying and maintaining hardware, and you have to duplicate it in different geographies with full failover. These are activities that distract you from your main focus of developing software. Fewer and fewer web sites maintain their own data centers; even large high-volume sites like Netflix or Foursquare run on Amazon Web Services.

Which to Choose?

So from these services, which one do you choose, and how do you go about choosing? This is a bit of a game where the customer and the service provider have very different goals.

As a customer (software developer), you want the cheapest service that is also the most reliable, the highest performing and the easiest to use. Actually, you always want the cheapest, so if something else comes along you would like to be able to move over easily. You might even want to choose two providers, so that if one goes down you are still running.

The service provider, on the other hand, would like to have you exclusively and to lock you into their service. They would like to have you reliant on them and to attract you with an initial low price, which they can then easily raise, since switching providers is difficult. They would also like additional services they can offer you down the road to increase your value to them as a customer.

OpenStack

Both Amazon and Azure look to lock you in by offering many proprietary services which, once you are using them, make switching to another service very difficult. These are valuable services, but as always you have to be careful about whether they are a trap.

Amazon pretty much owns this market right now, and new players have been having trouble entering it. Rackspace suddenly realized that just providing outsourced hardware wasn’t sufficient anymore and that too much new business was going to Amazon. They also realized that creating their own proprietary services in competition with Amazon probably wouldn’t work. Instead, Rackspace came up with the disruptive innovation of creating an open source cloud platform, called OpenStack, which it developed in conjunction with NASA. And since so many people were already invested in Amazon, they made it API-compatible with several Amazon services.

OpenStack has been adopted by many other Cloud providers and there are 150 companies that are officially part of the OpenStack project.

This new approach has opened up a lot of opportunities for software companies. Previously, to reduce lock-in to a given vendor, you had to keep your application in its own virtual image and then do a lot of the provisioning yourself. With OpenStack you can start to automate many processes and use cloud storage without locking yourself into a vendor or having to maintain several different ways of doing things.

Advantages for Customers

With OpenStack, suddenly customers can start to really utilize the cloud as a utility like electricity. You can:

  1. Get better geographic coverage by using several providers.
  2. Get better fault tolerance. If one provider has an outage, your service is still available via another.
  3. Better utilize spot prices to host via the lowest cost provider and to dynamically switch providers as prices fluctuate (see the sketch after this list).
  4. Have more power and flexibility when negotiating deals with providers.
  5. Go with the provider with the best service and switch as service levels fluctuate.
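As a toy illustration of point 3, choosing the lowest cost provider is just a matter of comparing current prices and re-evaluating periodically; the prices and provider names below are made up.

```python
# Minimal sketch of picking the lowest-cost provider for a workload from
# current spot prices. Prices are illustrative, not real quotes.

def cheapest_provider(prices):
    """prices: mapping of provider name -> current hourly price for the VM size."""
    return min(prices, key=prices.get)

current_prices = {"aws": 0.12, "rackspace": 0.11, "azure": 0.10}   # dollars/hour, made up
print(cheapest_provider(current_prices))   # -> "azure"
```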

One thing that scares software companies is that as soon as they commit to one platform and do a lot of work to support it, a new service suddenly appears that leapfrogs the previous ones. Keeping up and switching becomes a major challenge. OpenStack offers some hope of getting off this treadmill, or at least of making running on it a bit easier.

Is OpenStack Ready?

At this point OpenStack doesn’t offer as many services as Azure or AWS; its main appeal is flexibility. The key will be how well the major companies backing OpenStack can work together to evolve the platform quickly, and how strong their commitment is to keeping the platform open. For instance, will we start to see proprietary extensions in various implementations, rather than changes being contributed back to the home open source project?

Amazon and Azure have one other advantage: they are subsidized by other businesses. For instance, Amazon has to have all this server infrastructure anyway in order to handle the Christmas shopping rush on its web store, so it doesn’t really have to charge the full cost; any money it makes off AWS is really a bonus. By the same token, Microsoft is madly trying to buy market share in this space. It is taking profits from its Windows and Office businesses and subsidizing Azure to offer very attractive pricing that is very hard to resist.

Apple uses this strategy for iCloud. iCloud runs on both Amazon and Azure, so Apple isn’t locked into a single vendor, gets better performance in more regions, and won’t go down if one of these services goes down (like Azure did on Feb. 29). Generally we are seeing this strategy more and more, as people don’t want to put all their valuable eggs in one basket.

Summary

With the explosion of cloud platform providers, there are suddenly huge opportunities for software developers to reduce costs and expand capabilities and reach. But how do you remain nimble and quick in this new world? OpenStack provides a great basis for a service while still allowing people to easily move to new services and respond to the quickly changing cloud environment. It will be interesting to see how the OpenStack players can effectively compete with the proprietary and currently subsidized offerings from Microsoft and Amazon. Within Sage we currently have products on all these platforms: SalesLogix cloud is on Amazon, SageCRM.com is on Rackspace and Sage 200 (UK) is on Azure. It’s interesting to see how these all evolve.