Stephen Smith's Blog

Musings on Machine Learning…


Sage 300 Online


Introduction

At our Sage Summit conference we officially rolled out our new Sage 300 Online service. Sage 300 ERP has had an online hosted version for over ten years. The new Sage 300 Online service is meant to radically modernize our cloud version. It is now hosted in Microsoft Azure, sign on is via SageID, and the virtualization software is vastly improved.

We support all the standard Sage 300 ERP modules, provide integrations to other Sage applications like Sage Exchange, and host a number of third-party ISV products. Financial reporting and business intelligence are provided via Sage Intelligence (Sage Alchemex).

Using the cloud version of Sage 300 means that you don’t need to maintain your own servers or other hardware, you aren’t responsible for backing up your data, and you aren’t responsible for data center support. Further, you aren’t responsible for maintaining and upgrading the software: we install all necessary hotfixes and product updates, and we even perform major version upgrades for you, which can be a major cost saving.

Modern businesses tend to be distributed over many geographic locations, with many employees working from home or while on road trips. Using a cloud-based ERP allows all these people to access the central system from any place with Internet access, which is much easier than maintaining expensive and slow VPN-type networks.

First we’ll run through how the service looks and then talk about some of the various aspects of it.

Usage

To access and run Sage 300 Online, you go to a URL in your browser that redirects you to SageID.

[Screenshot: sage300online1 – the SageID sign-in page]

You now enter your SageID credentials and are signed into the system. This leads to the following page which gives a list of programs that you are allowed to run.

[Screenshot: sage300online2 – the list of programs you are allowed to run]

Clicking on “Sage 300 Online” will then launch the Sage 300 Online desktop. Below is a screenshot of the Desktop run from the landing web page and then running the Order Entry UI.

[Screenshot: sage300online3 – the Sage 300 Online desktop running the Order Entry UI]

Notice that it now looks exactly like you are running Sage 300 locally. You no longer run a terminal server client which presents a new Windows desktop inside your existing Windows desktop.

SageID

When you first sign up for the service, you provide an e-mail address which will become your SageID. This first user is the administrator for your company and can invite other users from a SageID administrative website to use the system. SageID will then be your login for our site.

Your SageID will be your login for all Sage Cloud services and is also connected to the back end billing systems, so we can provide one bill for all your services attached to your SageID. This includes the Sage Mobile applications unveiled at Sage Summit. This will make reporting and billing very simple for cloud users.

Microsoft Azure

Our Sage 300 Online service is hosted in Microsoft Azure. These are Microsoft’s cloud data centers with locations all over the world. We use Azure for its reliability and redundancy: if something were to happen to one data center, we could operate out of another. Also, the Microsoft network that connects these data centers to the Internet is extremely fast and has very low latency, leading to great performance.

Microsoft Azure supports both PaaS and IaaS infrastructures. We use PaaS infrastructure for the initial web pages and for the databases. We use IaaS infrastructure to run Sage 300 ERP in a virtualized environment.

Microsoft Azure allows us to offer our cloud solutions to all our customers no matter where in the world they are located from data centers that are in their region or close to their region. Currently Microsoft has Azure Data Centers in the US, Ireland, Hong Kong, Singapore and Amsterdam with locations opening soon in Sydney and Melbourne.

Microsoft Virtualization Framework

Within the Azure environment we run a standard Microsoft virtualization environment that Microsoft has created for any datacenter. This runs on a number of Windows Server 2012 machines. The virtualization environment is accessed via RDP 8, which supports our SageID integration, virtualizes just the applications (rather than presenting a full Windows desktop login), and supports expected functionality like copy/paste, printing to local printers, and transferring files to and from the cloud environment.

Microsoft Azure SQL

We run the Sage 300 company and system databases on Azure SQL servers, which are a PaaS implementation of SQL Server. This database provides reliability by performing every write to three separate replicas. Further, as Microsoft develops out their cloud roadmap, many new features and services are planned for Azure SQL to improve the general robustness of cloud solutions.
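To picture what writing every commit to three separate places buys you, here is a toy Python sketch (purely illustrative; this is not Azure SQL’s actual commit protocol) in which a write succeeds once a quorum of replicas acknowledge it:

```python
def replicated_write(replicas, key, value, quorum=2):
    """Push a write to every reachable replica; the commit succeeds
    once a quorum of them have stored it. A replica of None stands
    in for a server that is currently down."""
    acks = 0
    for replica in replicas:
        if replica is None:   # this replica is down; skip it
            continue
        replica[key] = value  # the replica stores the new value
        acks += 1
    return acks >= quorum
```

With three replicas and a quorum of two, the system keeps accepting writes even while one replica is unavailable.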

ISVs

With our initial offering, we don’t support all Sage 300 third-party solutions, but we do provide a set of some of the most popular. Generally the third-party solution has to be able to run in the cloud, which usually means it must be written with the Sage 300 SDK (the exception that proves the rule is PrintBoss).

The third-party products we support are: Orchid EFT and RMA, Pacific Technologies Purchasing Workflow and Funds Availability, Wellspring PrintBoss, Aatrix Payroll Reports and Avalara Sales Tax.

Customization

Since all companies using this cloud solution are running the same Sage 300 programs, you cannot customize the system by installing your own EXEs or DLLs, since these would then be used by every company subscribing to the system. Similarly, security is a much bigger concern in the cloud, and we have to carefully regulate what can get into the system.

Also since this is a multi-tenant environment we can’t allow any arbitrary third party solution to be installed. For any ISV that wants to participate in this offering, we need to verify that their solution will run in our Azure setup correctly and not cause any problems for other tenants.

Summary

We’ve started letting people use our Sage 300 Online offering with a Sage Summit Preview program. Then in a couple of months we’ll be making it officially generally available. The Microsoft Azure cloud gives us much more scalability and global reach than we previously had. Integrating with SageID and using newer virtualization technology greatly improves the usability and convenience of our product. We are very excited about this project and are looking forward to adding to it as we develop it forward.

The Sage Hybrid Cloud


Introduction

We introduced the concept of the Sage Hybrid Cloud along with a number of connected services at our Sage Summit conference back in August. This is intended to be a cloud based platform that greatly augments our on-premises business applications.

This blog posting will look at this platform in a bit more depth. Keep in mind that the platform is still under rapid development and things are changing quickly. If we think of better ways to do things, we will. We are approaching this with an Agile/Startup mentality, so we aren’t going to go off for years and years and develop this platform in a vacuum. We will be developing the functionality as we need it, for our real applications. This way we won’t spend time developing infrastructure that no one ends up using. Plus we will get feedback quicker on what is needed, since we will be releasing in quick cycles.

The Hybrid Cloud Platform

Below is a diagram showing the overall architecture of this platform. We have a number of cloud services hosted in the MS Azure cloud. We have a number of Sage business applications with a connector to this cloud. Then we have a number of mobile/web applications built on top of this hybrid cloud platform. Notice that pieces of this platform are already in use, with Sage Construction Anywhere (SCA) being a released product and then Sage 300 CRE already having a connector to this cloud to support the SCA mobile application.

The purple box at the bottom represents our current APIs and access methods, and just re-iterates that these are still present and being used.

The red box indicates that we will be hosting ERPs in this environment in a similar manner to our current cloud offerings like Sage300Online.com. We’ll talk about this in much more detail in future blog posts. But consider this Sage hosted applications version 2.0.

Mobile Applications

We demoed a number of mobile applications that we have under development at Summit; some screenshots are here. We are working hard to make these applications provide a first-class user experience. We are developing these in various technologies and combinations of technologies to drive the user experience to be the best possible. We are writing both HTML5/JavaScript applications using the Argos-SDK and native iOS, Windows 8 Metro and Android applications. Plus there are technologies that allow us to combine these approaches, using each where it makes sense in an application.

These mobile applications aren’t just current ERP screens ported to mobile/web technologies, they are whole new applications that didn’t exist before these powerful mobile devices came along to enable these ideas.

ERP Connectors

Each ERP needs a connector to the hybrid cloud. The connector uploads files for items that need to be looked up in the cloud (by finders, for example), and downloads transactions to enter into the ERP on a connected application’s behalf. The intent is to have one connector for each business application, rather than having to install and configure a separate connector for each connected service (of which we hope there will be dozens).

We want to keep the TCO of the solution as low as possible. To this end we don’t want the end user to have to configure any firewalls, DMZ or web servers. The connector will only call out to the cloud platform. There will never be calls into the connector.  Additionally you only need to configure the connector once with your SageID and away you go.

The connector will use SData Synchronization to synchronize the various files. This way it doesn’t matter if your on-premises ERP is off-line, it will catch up later. This makes the system much more robust since your mobile users can keep working even if you turn all your computers off completely.
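The store-and-forward behaviour described above can be sketched in a few lines of Python. This is a toy model with invented names, not the actual Sage connector code: transactions are queued locally while the link is down and flushed, in order, once connectivity returns.

```python
from collections import deque

class OutboundSyncQueue:
    """Illustrative store-and-forward queue: the connector only ever
    calls out to the cloud, buffering work while off-line."""

    def __init__(self, push_to_cloud):
        self._pending = deque()
        self._push = push_to_cloud  # outbound-only call to the cloud

    def record(self, transaction):
        """Queue a transaction produced while on- or off-line."""
        self._pending.append(transaction)

    def flush(self, online):
        """Drain the queue if we are on-line; otherwise keep
        everything pending so it can catch up later."""
        sent = []
        while online and self._pending:
            txn = self._pending.popleft()
            self._push(txn)
            sent.append(txn)
        return sent
```

Because nothing is lost while off-line, mobile users can keep working even if the on-premises machines are turned off completely.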

SData

We will use SData as the communications mechanism from the hybrid cloud. The cloud will host a large set of SData feeds to be used either by the mobile and web applications or by the on-premises ERP connectors.

Since SData is based on industry standards like REST, Atom, RSS and such, it’s easy for pretty much any web or mobile framework to use it. All modern toolkits have this support built in. Plus we provide SDKs like the Argos-SDK that have extra SData support built in.
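Because SData feeds are Atom documents, any standard XML toolkit can consume them. As an illustration, here is how Python’s standard library can pull entry titles out of a feed; the feed below is a tiny hand-written stand-in, whereas a real SData feed would also carry payload elements in an sdata namespace:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Minimal hand-written Atom document standing in for an SData feed.
FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>customers</title>
  <entry><title>Customer 1200</title><id>urn:example:1200</id></entry>
  <entry><title>Customer 1210</title><id>urn:example:1210</id></entry>
</feed>"""

def entry_titles(atom_xml):
    """Return the title of each entry in an Atom feed."""
    root = ET.fromstring(atom_xml)
    return [entry.find(ATOM_NS + "title").text
            for entry in root.findall(ATOM_NS + "entry")]
```

The same few lines work whether the feed came from a cloud service over HTTP or from a local test fixture, which is exactly the portability the standards buy you.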

ISVs

The intent will be that ISVs can use the SData feeds from the Hybrid Cloud as well to develop their own applications or to connect existing cloud based applications to all our Sage business applications. However we won’t start out with a complete database model, we will basically be adding to this cloud data model as we require things for our Sage developed solutions as well as for select ISVs. The intent is to get common functionality going first and then fill it in with the more obscure details later. For instance most connected services will need to access common master files like customers, vendors and items. Then most connected services will need to enter common documents like orders and invoices.

The feeling is that most integrations to ERP systems actually don’t access that many things. So the hope is that once the most common master files are synchronized and once the system accepts the most common transactions, then a great number of applications will be possible.

There will also be parts of the cloud database that don’t have any corresponding part in the ERP. There will be a fair bit of data that resides entirely in the cloud that is specific to the cloud portions of these applications.

SageID

When you are signing on to all these various connected services, we don’t want you to need a separate login id and password for each one. We would like you to register a user-id and password with Sage once and then use that identity for accessing every Sage connected service.

Ultimately we would like this to be the user id and password that you use to sign-on to our on-premises applications as well. Then this would be your one identity for all Sage on-premises and cloud applications. Then all your access rights and roles would be associated with this one identity.

Summary

The Sage Hybrid Cloud is an exciting project. The concept is that it’s starting small, with the Sage Construction Anywhere product already shipping, and will then develop quickly as we add other services. This should go quickly since we are leveraging the R&D resources of many Sage products to get exciting new mobile products to market, spanning the customer base of many Sage business applications.

Devices and the Cloud


Introduction

We are currently seeing a proliferation of devices being released, from a new line of tablets from Amazon, to new tablets and phones from Samsung, the iPhone 5 from Apple, and a certain amount of anticipation for all the Windows 8 based devices that should be coming next month.

Amazon is quickly becoming a major player in the tablet market; they have branched out from just delivering e-book readers to having a complete line of high performance tablets at very competitive prices. Amazon then makes most of its money selling books, music and movies to the owners of these tablets. These are Android based tablets that have been setup and configured with a fair bit of extra Amazon software to integrate seamlessly with the Amazon store.

In the Android world, Samsung continues to deliver exceptional phones and tablets of all shapes and sizes.

Apple has just shipped the iPhone 5 and we would expect new iPads sometime early next year.

Meanwhile Microsoft has released to manufacturing Windows 8 and devices based on this should be appearing on or after October 26.

With each generation of these devices we are getting faster processors, more memory, higher resolution displays, better sound quality, graphics co-processors, faster communications speeds and a plethora of sensors.

With so many players and with the stakes so high (probably in the trillions of dollars), competition is incredibly intense. Companies are competing in good ways, producing incredible products at very low prices and driving an incredible pace of innovation. They are also competing in rather negative ways, with attack campaigns, monopolistic practices, government lobbying and high levels of patent litigation. The good news is that the negative practices don’t seem to be blunting the extreme innovation we are seeing. We are getting used to being truly amazed with each new product announcement from all these vendors. Expectations are set very high at each product announcement and launch event. There is collateral damage, with once powerful companies like RIM or Nokia making a misstep and being left in the dust. But overall the industry is growing at an incredible pace.

Amazing new applications are being developed and released at a frenetic pace for all these devices. They know where you are, what you are doing and offer advice, tips or provide other information. There are now thousands of ways to communicate and share information. They all have deep integrations with all the main social media services. They communicate with natural language and high levels of intelligence.

The Data Cloud

Many of these devices are extremely powerful computers in their own right. The level of miniaturization we’ve achieved is truly astounding. These devices are all very different, but what they share is that they are all connected to the Internet.

We now access all our data and the Internet from many devices, depending on what we are doing. We may have a powerful server that we do processing on, a powerful desktop computer with large-screen monitors, a laptop that we use at work or at home, a tablet computer that we use when travelling or at the coffee shop, a smart phone that we rarely use to phone people with, perhaps a smart TV or an MP3 player, and so on. The upshot is that we no longer use one computing device exclusively. We now typically own and regularly use several; it seems most people have a work computer, a home computer, a tablet and at least one smart phone. People want to be able to do various work related tasks from any of these.

So how do we do this? How do we work on a document at work and then, when a thought strikes later, quickly update it from our phone? Most of these devices aren’t on our corporate LAN; typically we are connecting to the Internet via Wifi or via a cell phone network. The answer is that we are no longer storing these documents on a single given computer. We are now storing them in centralized, secure cloud storage (like iCloud, GDrive, or DropBox). These clouds are a great way to access what we are working on from any network and any device. Then each device has an app that is optimized for that device to offer the best way possible to work on the document in the given context. Many of these services, like Google Apps, even let multiple people work on these documents at once, completely seamlessly.

Further, many of these devices can keep working on our documents when we are off-line, for instance working on something on an iPad while on an airplane. Then, when the device is back on-line, it can synchronize any local data with the master copy in the data cloud. So the master copy is in the cloud, but there are copies off on many devices that are going on-line and off-line, and modern synchronization software keeps them all in sync and up to date. Google had an interesting ad for its Chrome notebooks where they keep getting destroyed, but the person in the ad just keeps getting handed a new one, logs in, and continues working from where he left off.
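The simplest flavour of this synchronization is a last-writer-wins merge of off-line edits into the cloud master copy. A deliberately simplified Python sketch follows; real sync engines also handle deletes, true conflicts and clock skew:

```python
def reconcile(master, device_edits):
    """Merge off-line edits into the cloud master copy.

    master maps doc_id -> (timestamp, content); each edit is a
    (doc_id, timestamp, content) tuple. The newest timestamp wins,
    so stale edits from a long-off-line device are discarded."""
    merged = dict(master)
    for doc_id, ts, content in device_edits:
        current = merged.get(doc_id)
        if current is None or ts > current[0]:
            merged[doc_id] = (ts, content)
    return merged
```

Run against a batch of queued edits when a device comes back on-line, this yields the single up-to-date master copy that all the other devices then see.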

The Grid

What we are ending up with is a powerful grid of computing devices accessing and manipulating our data. We have powerful servers in data centers (located anywhere) doing complex analytics and business processes, we have powerful laptop and tablet computers that can do quite powerful data input, editing and manipulation. Then we have small connected devices like phones that are great for quick inquiries or for making smaller changes. Our complete system as a whole consists of dozens of powerful computing devices all acting on a central data cloud to run our businesses.

In olden days we had mainframe computing, where people connected to a mainframe computer through dumb terminals and all processing was done by the mainframe, which was guarded and maintained by the IS department. Then the PC came along and disrupted that model. We then had separate PCs doing their own thing, independent from the central mainframe and independent from the IS department. Eventually this anarchy got corralled and brought under control with networks and things like domain policies. Then we took a turn back towards the mainframe days with Web based SaaS applications. Rather than running in the corporate data center, these run in the software vendor’s datacenter, and the dumb terminal is replaced by the Web Browser. This re-centralized computing again. Now this model is being disrupted once more by mobile devices, where the computing power is back in the hands of the device owners, who once again control what they are doing.

The difference now from the PC revolution is that everything is connected and out of this highly connected vast grid of very powerful computing devices we are going to see all new applications and all new ways of doing things. It’s interesting how quickly these disruptive waves are happening.

Summary

It’s really amazing what we are now taking for granted: things like voice recognition, where you can just ask your phone a question and actually get back the correct answer, or the ability to get a street view look at any location in the world. It’s an exciting time in the technology segment with no end in sight.

In this article I was referring to documents, which most people would associate with spreadsheets or word processing documents. But everything talked about here can equally apply to all the documents in an ERP or CRM system, such as Orders, Invoices, Receipts, Shipments, etc. We will start to see the same sort of distributed collaborative systems making it over to this space as well.

Written by smist08

September 22, 2012 at 8:07 pm

Choosing Between Cloud Providers


Introduction

It seems that every day there are more cloud providers offering huge cloud based computing resources at low prices. The sort of Cloud providers that I’m talking about in this blog posting are the ones where you can host your application in multiple virtual machines and then the cloud service offers various extension APIs and services like BigData or SQL databases. The extension APIs are there to help you manage load and automatically provision and manage your application. The following are just a few of the major players:

  1. Amazon Web Services. This is the most popular and flexible service. There are many articles on how much web traffic is handled by AWS these days.
  2. Microsoft Azure. Originally a platform for .Net applications, it now supports general virtualization and non-Microsoft operating systems and programs.
  3. Rackspace. Originally a hardware provider, now offers full services with the OpenStack platform.
  4. VMWare. Originally just a virtualization provider, has now branched out to full cloud services.

There are many smaller specialty players as well, like Heroku for Ruby on Rails or the Google App Engine for Java applications. There are also a number of other large players like IBM, Dell and HP going after the general market.

All of these services are looking to easily host, provision and scale your application. They all cater to a large class of applications, whether hosting in the cloud a standard Windows desktop application, or providing the hardware support for a large distributed SaaS web application. Many of these services started out for specific market niches like Ruby or .Net, but have since expanded to be much more general. Generally people are following the work of Amazon to be able to deploy seamlessly anything running in a virtual machine over any number of servers that can scale according to demand.

Generally these services are very appealing for software companies. It is quite expensive and quite a lot of trouble maintaining your own data center: you have to man it 24×7, and you are continually buying and maintaining hardware. You have to duplicate it in different geographies with full failover. These are quite a lot of activities that distract you from your main focus of developing software. Fewer and fewer web sites are maintaining their own data centers; even large high volume sites like NetFlix or FourSquare run on Amazon Web Services.

Which to Choose?

So from these services, which one do you choose, and how do you go about choosing? This is a bit of a game where the customer and the service provider have very different goals.

For a customer (software developer), you want the cheapest service that is the most reliable, highest performance and easiest to use. Actually, you would always like the cheapest, so if something better comes along, you would like to be able to move over easily. You might even want to choose two providers, so that if one goes down you are still running.

For the service provider, they would like to have you exclusively and to lock you in to their service. They would like to have you reliant on them and to attract you with an initial low price, which then they can easily raise, since switching providers is difficult. They would also like to have additional services that they can offer you down the road to increase your value to them as a customer.

OpenStack

Both Amazon and Azure look to lock you in by offering many proprietary services, which once you are using, makes switching to another service very difficult. These are valuable services, but as always you have to be careful as to whether they are a trap.

Amazon pretty much owns this market right now, and new players have been having trouble entering it. Rackspace suddenly realized that just providing outsourced hardware wasn’t sufficient anymore and that too much new business was going to Amazon. They realized that creating their own proprietary services in competition with Amazon probably wouldn’t work. Instead, Rackspace came up with the disruptive innovation of creating an open source cloud platform called OpenStack, which it developed in conjunction with NASA. They also realized that so many people were already invested in Amazon that they made it API compatible with several Amazon services.

OpenStack has been adopted by many other Cloud providers and there are 150 companies that are officially part of the OpenStack project.

This new approach has opened up a lot of opportunities for software companies. Previously, to reduce lock-in to a given vendor, you had to keep your application in its own virtual image and then do a lot of the provisioning yourself. With OpenStack you can start to automate many processes and use cloud storage without suddenly locking yourself into a vendor or having to maintain several different ways of doing things.

Advantages for Customers

With OpenStack, suddenly customers can start to really utilize the cloud as a utility like electricity. You can:

  1. Get better geographic coverage by using several providers.
  2. Get better fault tolerance. If one provider has an outage, your service is still available via another.
  3. Better utilize spot prices to host via the lowest cost provider and to dynamically switch providers as prices fluctuate.
  4. Have more power and flexibility when negotiating deals with providers.
  5. Go with the provider with the best service and switch as service levels fluctuate.
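Points 3 and 5 above boil down to a simple selection problem: pick the cheapest provider that is currently up, and fail over when it isn’t. A minimal Python sketch, where the provider names and spot prices are invented purely for illustration:

```python
def pick_provider(hourly_prices, unavailable=()):
    """Choose the cheapest available provider.

    hourly_prices maps provider name -> current spot price;
    unavailable lists providers that are down or ruled out."""
    candidates = [(price, name)
                  for name, price in hourly_prices.items()
                  if name not in unavailable]
    if not candidates:
        raise RuntimeError("no cloud provider available")
    price, name = min(candidates)  # cheapest wins
    return name
```

Re-running this selection as prices fluctuate, or as a provider drops out, is exactly the kind of switching that a common platform like OpenStack makes practical.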

One thing that scares software companies is that as soon as they commit to one platform and do a lot of work to support it, suddenly a new service appears that leapfrogs the previous ones. Keeping up and switching becomes a major challenge. OpenStack starts to offer some hope of getting off this treadmill, or at least making running on it a bit easier.

Is OpenStack Ready?

At this point OpenStack doesn’t offer as many services as Azure or AWS; its main appeal is flexibility. The key will be how well the major companies backing OpenStack can work together to evolve the platform quickly, and how strong their commitment is to keeping the platform open. For instance, will we start to see proprietary extensions in various implementations, rather than contributions back to the home open source project?

Amazon and Azure have one other advantage: they are subsidized by other businesses. For instance, Amazon has to have all this server infrastructure anyway in order to handle the Christmas shopping rush on its web store, so it doesn’t really have to charge the full cost; any money it makes off AWS is really a bonus. By the same token, Microsoft is madly trying to buy market share in this space. It is taking profits from its Windows and Office businesses and subsidizing Azure to offer very attractive pricing which is very hard to resist.

Apple uses this strategy for iCloud. iCloud runs on both Amazon and Azure. This way it isn’t locked into a single vendor, has better performance in more regions, and won’t go down if one of these services goes down (like Azure did on Feb. 29). Generally we are seeing this strategy more and more, as people don’t want to put all their valuable eggs in one basket.

Summary

With the sudden explosion of Cloud platform providers, there are suddenly huge opportunities for software developers to reduce costs and expand capabilities and reach. But how do you remain nimble and quick in this new world? OpenStack provides a great basis for service and then allows people to easily move to new services and respond to the quickly changing cloud environment. It will be interesting to see how the OpenStack players can effectively compete with the proprietary and currently subsidized offerings from Microsoft and Amazon. Within Sage we currently have products on all these platforms: SalesLogix cloud is on Amazon, SageCRM.com is on Rackspace and Sage 200 (UK) is on Azure. It’s interesting to see how these are all evolving.

Accpac on the Amazon Cloud


Introduction

The Amazon Elastic Compute Cloud (EC2) (http://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud) is a service offered by Amazon.com that allows people to rent virtual computers to run applications on. Some of the innovations offered by this solution include:

  • Very easy to get started: you just need an Amazon account, attach it to EC2, and off you go.
  • Very inexpensive, with a good (nearly) free trial (http://aws.amazon.com/ec2/pricing/).
  • Scalable and expandable depending on your needs.
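To put some numbers behind the pricing point above, a quick back-of-the-envelope estimate for one small instance plus its EBS volume might look like the following. The rates here are made-up placeholders; always check Amazon’s current price list for real figures:

```python
def monthly_cost(hourly_rate, hours_per_day=24, days=30,
                 ebs_gb=30, ebs_rate_per_gb=0.10):
    """Rough monthly cost of one EC2 instance plus its EBS volume.

    All rates are illustrative assumptions, not Amazon's prices."""
    compute = hourly_rate * hours_per_day * days  # instance hours
    storage = ebs_gb * ebs_rate_per_gb            # EBS volume
    return round(compute + storage, 2)
```

One useful observation falls straight out of the formula: an instance you only run during business hours costs a fraction of one left running around the clock, which is part of what makes the pay-as-you-go model attractive.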

Often the simplicity of getting started with this solution gets lost, since people are usually confronted with the advanced features at the beginning, even though you don’t need to worry about them until later. Just be reassured that this is a solution that can grow with you. Below is a diagram of some of the services offered:

In this blog posting we will look at how to deploy Accpac on the Amazon EC2 cloud and discuss some of the trade-offs and choices that can be made along the way.

Terminology

One thing that makes using Amazon EC2 intimidating is the terminology. But here is a quick guide to the key points.

  • Amazon Machine Image (AMI) – These are virtual machine snapshots that you take as a starting point for doing work. Amazon provides a number of these as starting points, there are a number of public ones offered by other people, plus you can create your own. Basically, when you want a new virtual machine you take one of these as your starting point.
  • Instances – You create an instance from an AMI and the instance is the virtual machine that you actually run. When you specify the instance you specify the resources it has including memory, disk space and computing power. For more on the instance types see: http://aws.amazon.com/ec2/instance-types/.

You manage all these things from the Amazon Management Console:

Deploying Accpac

Deploying Accpac to Amazon EC2 is fairly straightforward. You just need to select a starting virtual image (AMI) of something that Accpac supports, create an instance of it, run the instance, install and configure Accpac into that image, and off you go. There are a couple of “gotchas” to watch out for that we will highlight along the way.

  1. Go to http://aws.amazon.com/ec2/ and sign up for an account.
  2. Run the AWS Management Console (https://console.aws.amazon.com/ec2) and create a PKI security key pair. You will need to do this before doing anything else. This will be the security token you use to connect to your virtual image running on EC2.
  3. On the upper left of the management console, make sure it is set to the region that is closest to you like perhaps “US West”.
  4. Click the “Launch Instance” button on the AWS Management Console. You will now be prompted to choose a starting AMI. A good one to choose is: “Getting Started on Microsoft Windows Server 2008” from the Quick Start tab. This one has IIS and SQL Server Express Installed.
  5. Select “Small” for the instance type, unless you know you will need more resources quickly. Then accept the defaults for the advanced instance options. Same for the tags screen (i.e. you probably don’t need any).
  6. On the “create key pair” screen, select the key you created in step 2 (or if you skipped that then you need to create a pair now).
  7. On the configure firewall screen, remove the opening for SQL Server; you don’t need it. The only two holes in the firewall should be RDP and HTTP. If you are hosting client data, then you should add HTTPS and set up Accpac to use it (see https://smist08.wordpress.com/2010/11/20/setting-up-sage-erp-accpac-6-0a-securely/).
  8. Now you can review your settings and Launch your instance. It can take 15 minutes or so for the instance to launch, mostly due to the time it takes Windows Server 2008 to boot. So this is a good time to go get a coffee.
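The console steps above can also be sketched as commands. The AWS command line tool postdates this post, so the following is only a hypothetical equivalent, with placeholder AMI, key and security group names, written as a dry run that prints each command rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch of the launch steps using the AWS CLI (which did not
# exist when this was written). AMI ID, key name and group name are
# placeholders; change run() to execute its arguments for real.
run() { echo "$@"; }

# Step 2: create the PKI key pair
run aws ec2 create-key-pair --key-name accpac-key \
    --query KeyMaterial --output text

# Step 7: firewall with only RDP (3389) and HTTP (80) open
run aws ec2 create-security-group --group-name accpac-sg \
    --description "RDP and HTTP only"
run aws ec2 authorize-security-group-ingress --group-name accpac-sg \
    --protocol tcp --port 3389 --cidr 0.0.0.0/0
run aws ec2 authorize-security-group-ingress --group-name accpac-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Steps 4-8: launch a small Windows Server instance from a placeholder AMI
run aws ec2 run-instances --image-id ami-12345678 \
    --instance-type m1.small --key-name accpac-key \
    --security-groups accpac-sg
```

To actually launch, change the run function to execute its arguments and substitute real IDs for the placeholders.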

At this point we have created a virtual image and have it running. From the AWS Management Console EC2 dashboard, we should see one running instance. We should also see one EBS volume; the EBS volume is the disk image of your virtual machine. If you want, you can create snapshots of your EBS volume (you have to pay to store these) so you can roll back to them if you mess up your image. So now we have our own Windows 2008 server running in the Amazon cloud. Great, but now what do we do? How do we connect to it? How do we transfer files to it? How do we browse to it? What are the Administrator and SQL Server passwords? Next we’ll go through the steps of getting the Administrator password, connecting via RDP and installing Accpac.

  1. Select the instance that you have running in the management console. From the instance actions menu, choose “Get Windows Admin Password”. If this doesn’t work, you may need to give the instance a bit more time to start. You will get a dialog that asks you to take the file you downloaded back at step 2, load it into Notepad and copy/paste its contents into the dialog. The dialog then performs a lengthy cryptographic calculation and tells you the Windows password.
  2. Now you can run Remote Desktop and connect to your instance (if you choose Connect from the instance menu it will download a file that starts RDP with the right parameters). Use the public DNS as the computer name (from the pane with the instance details below the instance list); Administrator is the login. Be careful: copying and pasting the password can be tricky because Windows tends to add a trailing space when you copy it. If copy/paste doesn’t work, try typing the password instead. Now you are logged in and running. Perhaps the first thing you want to do is change the Administrator password to something easier to type and remember. From here you can treat this virtual Windows Server 2008 just like any other remote server.
  3. Copy the installation image for Accpac onto the virtual machine. You can use an FTP site or any other file copy mechanism to do this. One convenient method in Windows 7 is that RDP can make local drives accessible to the remote computer: if you choose Options – Local Resources you can expose some drives to the remote computer, and they will then show up in its Windows Explorer.
  4. Now we need to enable SQL Server; by default the service is disabled and authentication is set to Windows Authentication only. Go to Administrative Tools – Services and set the SQL Server services to Automatic and start them. In the SQL Server Configuration Manager enable TCP/IP and set the port to 1433. In SQL Server Management Studio set the authentication mode to SQL Server and Windows Authentication, then go to the sa user and enable it. Now restart the SQL Server service. Create your Accpac databases such as PORTAL, SAMSYS, SAMINC, SAMLTD, …
  5. Run the Accpac installation you copied into the image and perform the usual steps to get Accpac up and running. When running database setup, make sure you use localhost as the server name and not the current Windows instance name, because this will change each time you run the image.
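Step 4 above can be sketched as commands you would run on the Windows instance itself. The service name assumes SQL Server Express’s default MSSQL$SQLEXPRESS instance, which may differ on your image; it is written as a dry run that just prints each command:

```shell
#!/bin/sh
# Dry-run sketch of the SQL Server setup on the Windows instance.
# MSSQL$SQLEXPRESS is SQL Server Express's default service name; yours
# may differ. Change run() to execute for real on the server.
run() { echo "$@"; }

# Set the SQL Server service to start automatically, then start it
run sc config 'MSSQL$SQLEXPRESS' start= auto
run net start 'MSSQL$SQLEXPRESS'

# With mixed-mode authentication turned on, enable the sa login
run sqlcmd -S localhost -E -Q "ALTER LOGIN sa ENABLE"

# Create the Accpac databases
for db in PORTAL SAMSYS SAMINC SAMLTD; do
    run sqlcmd -S localhost -E -Q "CREATE DATABASE $db"
done
```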

We now have Accpac up and running and can access Accpac via RDP. To access the Portal use the public DNS as the server name in the usual URL for running the portal:

http://<public_DNS_goes_here>/SageERPAccpac/portal60a/portal.html

Voilà, you are running in the cloud.

If you shut down this instance and restart it, you will get a new computer name and a new public DNS. This can be rather annoying if you like to set up browser shortcuts and such. If you want to avoid this, you need to allocate a static Elastic IP address from AWS (doing this costs a small amount of money). Then you can associate this IP address with the instance and it will stick. Further, you could purchase a meaningful domain name and associate it with this IP address. If you don’t want to purchase one, another trick is to use TinyURL.com to generate a URL for your IP address. This isn’t a very meaningful URL, but it’s better than the raw IP address.
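The Elastic IP steps can be sketched the same way (again, the AWS command line postdates this post, and the instance ID and IP address are placeholder values):

```shell
#!/bin/sh
# Dry-run sketch: reserve an Elastic IP so the address survives restarts,
# then build the portal URL. Instance ID and IP are placeholder values.
run() { echo "$@"; }

run aws ec2 allocate-address
run aws ec2 associate-address --instance-id i-12345678 \
    --public-ip 203.0.113.10

# With a stable address (or the instance's public DNS) the portal URL is:
PUBLIC_DNS="203.0.113.10"
PORTAL_URL="http://${PUBLIC_DNS}/SageERPAccpac/portal60a/portal.html"
echo "$PORTAL_URL"
```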

How Well Does It Run?

Once running, how does it compare to a local server? With the small configuration you are limited a bit in memory. Running the Sage ERP Accpac 6.0A portal on the virtual image inside an RDP session is a bit slow. However, running the browser locally and hitting the server remotely is quite quick. This implies that the small image is sufficient to run the server processes for a few users, but you will need to increase the memory and/or the processing power for more. The nice thing with Amazon is that you can change this fairly easily and only pay for what you are using. It also shows that the Amazon datacenters have quite good network latency, probably better than you can get hosting remote users yourself.

Going Production

So can you go into production with this? Certainly the platform can support it. The current sticking point is Terminal Server or Citrix licenses. These are available through various programs such as: http://community.citrix.com/pages/viewpage.action?pageId=141100352. However, you need to be part of one of these Microsoft or Citrix programs where they give you specific permission to migrate your licenses to EC2. While we still have Windows desktop components this is a potential sticking point. However, once Sage ERP Accpac 6.1A comes out and we can run all the main accounting applications through the web, this problem goes away.

Amazon is also addressing other compliance type concerns, for instance achieving PCI DSS Level 1 Compliance (http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/?ref_=pe_8050_17986660) and ISO 27001 Certification (http://aws.amazon.com/security/iso-27001-certification-faqs/?ref_=pe_8050_17986660). Achieving these sorts of certifications removes a lot of obstacles to using Amazon for a production environment.

Also, if you want to back up your data locally, you will need to copy a backup of your SQL Server database over the Internet, which can be quite time consuming, but you can let it run in the background.
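A minimal sketch of such a backup routine, assuming sqlcmd is available on the server and using example database and path names (shown as a dry run; forward slashes keep the sketch shell-friendly):

```shell
#!/bin/sh
# Dry-run sketch of a local-backup routine: dump the database on the
# server, compress it, then copy it down over the Internet. Database
# name and paths are examples only.
run() { echo "$@"; }

run sqlcmd -S localhost -E \
    -Q "BACKUP DATABASE SAMINC TO DISK = 'C:/backups/SAMINC.bak'"
# Compress before transferring to cut time spent on the wire
run zip C:/backups/SAMINC.zip C:/backups/SAMINC.bak
```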

Summary

Amazon’s EC2 service offers an excellent way to access extra computing resources at very low cost. You can deploy services to regions around the world and dynamically adjust the computing resources you are using. For developers this is a very cheap way to obtain test servers during development. For partners this is an excellent way to establish demo servers. For education it is an excellent way to learn how to work with different operating systems and to practice installations.

Written by smist08

December 17, 2010 at 9:41 pm

Accpac in the Cloud

with 8 comments

It used to be that running in the cloud (http://en.wikipedia.org/wiki/Cloud_computing) meant using a generic SaaS based web application, but nowadays technology advances have offered many choices beyond this basic option. Virtualization technologies have advanced in leaps and bounds. Hosted solution providers now offer many innovative options based on both regular and virtualized solutions. Plus the capabilities of SaaS web applications have improved quite a bit. Today Accpac has several cloud based solutions available, and in the future we will have a full spectrum of solutions. Combining Terminal Services or Citrix with virtualization is now a very powerful method of economically hosting enterprise applications. It is at the point where you can host your Windows based enterprise application in the cloud and access it from an iPad (by running the Citrix client application http://www.citrix.com/English/NE/news/news.asp?newsID=1864354).

In some ways this wealth of riches has led to some confusion. There are so many options for deploying an application that it gets quite difficult to wade through all the choices and all the claims made by various hosting and application vendors. This blog post will attempt to outline the main choices and trade-offs, along with pointing out the various places that Sage ERP Accpac plays now and will play in the future.

Cloud Categories

The cloud usually means that a client is running an application on a remote server via the Internet. There are many ways this can be achieved. Many of the options come down to who owns what (you or the vendor) and what type of application is being talked about (Windows or web based). However, the following are the four main categories.

  1. Hosted Server. Server is owned by customer and maintained in a central data center but accessed via the Internet.  This is usually a Windows based application and the customer owns everything (server and software), but is outsourcing the physical care of the server and often other services like backup. A typical company in this area is Rackspace (http://www.rackspace.com/index.php).
  2. Shared Virtual Server. A server farm is owned by the data center vendor and the client owns virtual server images that run on that farm. Here the customer owns the software (usually Windows based), but the data center owns the hardware. Typical of this is the Amazon EC2 service (http://aws.amazon.com/ec2/).
  3. Single Tenant SaaS. Vendor owns servers and images. Runs each client in their own image (whether virtual or multiple processes). Here the application is usually Windows based (but often we see web based applications here also). AccpacOnline is a good example of this.
  4. Multi-tenant SaaS. Vendor owns everything; the client runs within a shared image or process (usually distributed over many servers). Here the application is always a web application.

Any of these could be running web applications or desktop applications, but we highlight the most common cases above. The costs, and who pays them, differ in each situation, and there are pros and cons to each. The differences between these are often blurred as companies offer hybrid solutions. Also some vendors try to define a category exactly as they implement it and claim anyone who does it differently is incorrect, but as with anything there are always many choices.

Customer Goals

So why are customers demanding cloud based solutions rather than just buying software and installing it on premise? Below are some of the goals that clients are trying to achieve:

  1. Save costs by not having a data center. Save on requiring extra air conditioning, backup power supplies, hard disk backup and space. Save on routine maintenance like performing backups.
  2. Save capital equipment costs by not purchasing hardware which devalues quickly and needs to be replaced often. Have fixed constant monthly expense instead.
  3. Save capital software purchase expenses. Don’t want a large up-front purchase; would rather pay a much smaller monthly fee. Even if this is more expensive over x years, it doesn’t require the initial outlay. (Some on-premise software can now be “rented” as another solution to this.)
  4. Save HR expenses by not needing to hire IT staff to run and manage corporate hardware and software.
  5. Don’t have to perform software installation or maintenance.
  6. Ability to access their applications safely and securely from anywhere in the world whether through laptops or other mobile devices.
  7. Desire to use a modern web-based application which has the ease of use of Facebook or Amazon.
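The trade-off in point 3 can be made concrete with a little arithmetic; all the dollar figures below are invented purely for illustration:

```shell
#!/bin/sh
# Illustrative only: compare a one-time licence purchase against a
# monthly subscription over several years. All figures are invented.
UPFRONT=10000        # hypothetical one-time purchase price
MONTHLY=250          # hypothetical subscription fee per month

for years in 1 2 3 4 5; do
    total=$((MONTHLY * 12 * years))
    echo "Year $years: subscription total \$$total vs purchase \$$UPFRONT"
done
```

With these invented numbers the subscription costs more from year four on, yet it never demanded the initial outlay, which is exactly the trade-off customers are weighing.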

When considering cloud solutions there are a number of problems that are usually cited that clients try to avoid:

  1. Lack of customization in cloud offerings.
  2. Lack of ability to use/integrate other software.
  3. Lack of ownership of data. Lack of ability to get their data out. What happens if the cloud vendor goes out of business?
  4. Security concerns, how safe and private is their data?

All solutions have an answer to these; customers just have to determine if those answers are sufficient, cost effective and achievable.

Vendor Goals

To be fair, many vendors are pushing cloud solutions quite strongly, and this isn’t for purely altruistic reasons. Below are the goals that vendors are trying to achieve:

  1. Obtain a more steady cash flow. Steady subscription revenue, rather than relying on less frequent large purchases. Easier for financial planning and more recession proof.
  2. Obtain access to a larger market by being able to serve the world from a single location.
  3. Can organize so that only one version of the software needs to be supported, reducing costs. Similarly, only one hardware/operating system environment needs to be supported.
  4. Can have more direct contact with customers since you are sharing an operating environment – more control of ECE (Extraordinary Customer Experience).
  5. For hosting-only vendors that aren’t software vendors as well, this is their entire business, so of course they are selling it as hard as they can.

Sage ERP Accpac in the Cloud

Now let’s look at the options for Accpac in the cloud. At the beginning of this article we looked at four cloud categories, now we’ll look into how Accpac serves these four categories.

  1. Anyone can do category 1. This is basically just installing an on-premise application like Sage ERP Accpac 5.6A on a terminal server and then physically moving that server to a hosting provider for care and feeding. Clients then access the server using RDP or a Citrix client.
  2. This option is very similar to category 1, except rather than installing on a physical server you install your software into a virtual image either provided or specified by the hosting vendor. Then you transfer this image to the vendor, who runs and maintains it. If the vendor is associated with Sage or Accpac already, they may provide a virtual environment with Accpac already installed.
  3. Both Accpac and SageCRM have operated single tenant SaaS environments for some time with www.accpaconline.com and www.sagecrm.com. Accpac operates the desktop version of Accpac in a SaaS manner, using Citrix to manage things, but each client runs in their own separate server memory space. SageCRM is web based but requires a unique instance of SageCRM running for each customer. Here you pay by the month and can run VBA macros, Microsoft Office and a selection of ISV products.
  4. Sage ERP Accpac 6.x has better multi-tenancy support than the current AccpacOnline because all clients run in the same server processes and share the same server memory. The goal is that once we have moved all the accounting screens to be true web based screens (after 6.1A), we can deploy them as a SaaS solution. As part of developing the Accpac 6.x infrastructure, we ensured we plumbed in support for a multi-tenanted deployment of this type. Once we enter this world we will be a true web based application that doesn’t require Terminal Server, Citrix or virtualization technologies to run. It will be a modern web based application that you can run either on-premise or as a true SaaS web application.

Summary

It used to be that a big competitive advantage of a true web based SaaS solution was a much lower cost, achieved by running far more users per server. However, with all the advances in virtualization technology and Terminal Server/Citrix, much of this gap has narrowed, making solutions of these types very cost competitive.

As we can see, Accpac already has many options for cloud deployment that achieve many of the customer goals in considering a cloud application. In the future we will go beyond this to offer a true web based SaaS solution. As these technologies progress, the TCO (Total Cost of Ownership) will come down as we can host more users per server. For clients it becomes easier to get customized solutions in the cloud with all the attendant cost savings, along with better usability and accessibility.

Written by smist08

December 4, 2010 at 6:30 pm